Test Report: QEMU_macOS 18421

715903ea5b86ab0a28d26e6fe572bd5327dfa9fc:2024-03-18:33639

Tests failed: 156 of 266

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 41.34
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.03
36 TestAddons/Setup 10.23
37 TestCertOptions 10.14
38 TestCertExpiration 195.24
39 TestDockerFlags 10.02
40 TestForceSystemdFlag 10.23
41 TestForceSystemdEnv 10.09
47 TestErrorSpam/setup 9.91
56 TestFunctional/serial/StartWithProxy 10.05
58 TestFunctional/serial/SoftStart 5.26
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
70 TestFunctional/serial/MinikubeKubectlCmd 0.56
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.72
72 TestFunctional/serial/ExtraConfig 5.26
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.08
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.13
86 TestFunctional/parallel/ServiceCmdConnect 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.12
91 TestFunctional/parallel/CpCmd 0.29
93 TestFunctional/parallel/FileSync 0.08
94 TestFunctional/parallel/CertSync 0.29
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
104 TestFunctional/parallel/Version/components 0.04
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
109 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
111 TestFunctional/parallel/DockerEnv/bash 0.05
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
115 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
116 TestFunctional/parallel/ServiceCmd/List 0.05
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
119 TestFunctional/parallel/ServiceCmd/Format 0.05
120 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 116.5
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.31
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.34
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.62
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 36.64
150 TestMultiControlPlane/serial/StartCluster 9.9
151 TestMultiControlPlane/serial/DeployApp 116.15
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.07
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.11
156 TestMultiControlPlane/serial/CopyFile 0.07
157 TestMultiControlPlane/serial/StopSecondaryNode 0.11
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.11
159 TestMultiControlPlane/serial/RestartSecondaryNode 56.52
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.1
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 7.25
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.1
164 TestMultiControlPlane/serial/StopCluster 3.44
165 TestMultiControlPlane/serial/RestartCluster 5.23
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.11
167 TestMultiControlPlane/serial/AddSecondaryNode 0.08
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.1
171 TestImageBuild/serial/Setup 9.82
174 TestJSONOutput/start/Command 9.85
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.05
203 TestMinikubeProfile 10.31
206 TestMountStart/serial/StartWithMountFirst 11.02
209 TestMultiNode/serial/FreshStart2Nodes 9.93
210 TestMultiNode/serial/DeployApp2Nodes 120.44
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.08
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.11
215 TestMultiNode/serial/CopyFile 0.07
216 TestMultiNode/serial/StopNode 0.15
217 TestMultiNode/serial/StartAfterStop 48.1
218 TestMultiNode/serial/RestartKeepsNodes 8.57
219 TestMultiNode/serial/DeleteNode 0.11
220 TestMultiNode/serial/StopMultiNode 3.46
221 TestMultiNode/serial/RestartMultiNode 5.26
222 TestMultiNode/serial/ValidateNameConflict 20.08
226 TestPreload 10.1
228 TestScheduledStopUnix 9.98
229 TestSkaffold 16.68
232 TestRunningBinaryUpgrade 639.8
234 TestKubernetesUpgrade 18.66
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.48
248 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.44
250 TestStoppedBinaryUpgrade/Upgrade 579.45
252 TestPause/serial/Start 9.82
262 TestNoKubernetes/serial/StartWithK8s 9.98
263 TestNoKubernetes/serial/StartWithStopK8s 5.87
264 TestNoKubernetes/serial/Start 5.92
268 TestNoKubernetes/serial/StartNoArgs 5.94
270 TestNetworkPlugins/group/auto/Start 9.78
271 TestNetworkPlugins/group/kindnet/Start 9.81
272 TestNetworkPlugins/group/calico/Start 9.93
273 TestNetworkPlugins/group/custom-flannel/Start 9.76
274 TestNetworkPlugins/group/false/Start 9.96
275 TestNetworkPlugins/group/enable-default-cni/Start 9.78
277 TestNetworkPlugins/group/flannel/Start 10.07
278 TestNetworkPlugins/group/bridge/Start 9.75
279 TestNetworkPlugins/group/kubenet/Start 11.4
281 TestStartStop/group/old-k8s-version/serial/FirstStart 11.91
283 TestStartStop/group/no-preload/serial/FirstStart 9.98
284 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
285 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
288 TestStartStop/group/old-k8s-version/serial/SecondStart 5.28
289 TestStartStop/group/no-preload/serial/DeployApp 0.09
290 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
292 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
293 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
294 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
295 TestStartStop/group/old-k8s-version/serial/Pause 0.11
297 TestStartStop/group/embed-certs/serial/FirstStart 10.18
299 TestStartStop/group/no-preload/serial/SecondStart 7.55
300 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
301 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.07
302 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
303 TestStartStop/group/no-preload/serial/Pause 0.11
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.27
306 TestStartStop/group/embed-certs/serial/DeployApp 0.1
307 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.13
310 TestStartStop/group/embed-certs/serial/SecondStart 5.92
311 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
312 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
313 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.13
315 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
316 TestStartStop/group/embed-certs/serial/Pause 0.12
319 TestStartStop/group/newest-cni/serial/FirstStart 9.87
321 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.67
324 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
325 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.07
327 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
328 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
330 TestStartStop/group/newest-cni/serial/SecondStart 5.26
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
334 TestStartStop/group/newest-cni/serial/Pause 0.11
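
Editor's note: the near-uniform ~10 s durations on the Start/FirstStart rows above are consistent with the qemu2 VM-creation failure detailed in the sections below (socket_vmnet connection refused), which aborts each start attempt before a cluster ever boots. One hedged way to confirm the pattern across a saved copy of this report (hypothetical command; report.txt stands in for the saved page):

	grep -c 'Failed to connect to "/var/run/socket_vmnet"' report.txt
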
TestDownloadOnly/v1.20.0/json-events (41.34s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-993000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-993000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (41.340184917s)

-- stdout --
	{"specversion":"1.0","id":"0ea4f177-f1c6-4796-b128-cd4a4ea9c63e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-993000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c49266ef-a3b8-4af1-8cad-b2ce6d2ad945","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18421"}}
	{"specversion":"1.0","id":"bae02e1b-aff1-45ad-b780-c010fae1fcc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig"}}
	{"specversion":"1.0","id":"be5a7b84-25fa-458e-9bd7-7d9a7c7ec0b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"e8d983fd-be04-45e6-bf9a-3a46594c437b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cf329393-3cd0-4bf8-9d5d-337ef1efc7cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube"}}
	{"specversion":"1.0","id":"9d717b94-ec79-498d-b1a8-001e976653e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"537320f5-479c-4411-b418-808892fc4a77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7cba400e-b0a1-4e48-bf05-1863205de0bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"cd5b44fe-fbd0-4bd7-afaf-17bb029d6967","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9217cc8b-b613-4688-bd08-e718c049e8d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-993000\" primary control-plane node in \"download-only-993000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3aac778d-6867-473e-93f0-445305b031ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f67fbd19-c13b-4cb6-98d7-e8f2d458efd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1085a3520 0x1085a3520 0x1085a3520 0x1085a3520 0x1085a3520 0x1085a3520 0x1085a3520] Decompressors:map[bz2:0x140006dbbc0 gz:0x140006dbbc8 tar:0x140006dbb70 tar.bz2:0x140006dbb80 tar.gz:0x140006dbb90 tar.xz:0x140006dbba0 tar.zst:0x140006dbbb0 tbz2:0x140006dbb80 tgz:0x14
0006dbb90 txz:0x140006dbba0 tzst:0x140006dbbb0 xz:0x140006dbbd0 zip:0x140006dbbe0 zst:0x140006dbbd8] Getters:map[file:0x14002030d80 http:0x1400057e190 https:0x1400057e1e0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"2859c12e-3e9f-4707-bc99-b9dc2b607cc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0318 13:28:38.465597    7238 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:28:38.465743    7238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:28:38.465747    7238 out.go:304] Setting ErrFile to fd 2...
	I0318 13:28:38.465749    7238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:28:38.466086    7238 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	W0318 13:28:38.466206    7238 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18421-6777/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18421-6777/.minikube/config/config.json: no such file or directory
	I0318 13:28:38.467775    7238 out.go:298] Setting JSON to true
	I0318 13:28:38.489400    7238 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5290,"bootTime":1710788428,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:28:38.489465    7238 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:28:38.501770    7238 out.go:97] [download-only-993000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:28:38.504702    7238 out.go:169] MINIKUBE_LOCATION=18421
	I0318 13:28:38.501900    7238 notify.go:220] Checking for updates...
	W0318 13:28:38.501922    7238 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball: no such file or directory
	I0318 13:28:38.526813    7238 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:28:38.529730    7238 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:28:38.533781    7238 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:28:38.537822    7238 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	W0318 13:28:38.544791    7238 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 13:28:38.545017    7238 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:28:38.547722    7238 out.go:97] Using the qemu2 driver based on user configuration
	I0318 13:28:38.547746    7238 start.go:297] selected driver: qemu2
	I0318 13:28:38.547753    7238 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:28:38.547845    7238 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:28:38.550719    7238 out.go:169] Automatically selected the socket_vmnet network
	I0318 13:28:38.556263    7238 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0318 13:28:38.556389    7238 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 13:28:38.556472    7238 cni.go:84] Creating CNI manager for ""
	I0318 13:28:38.556494    7238 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0318 13:28:38.556548    7238 start.go:340] cluster config:
	{Name:download-only-993000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-993000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:28:38.561976    7238 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:28:38.565824    7238 out.go:97] Downloading VM boot image ...
	I0318 13:28:38.565863    7238 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso
	I0318 13:28:57.319991    7238 out.go:97] Starting "download-only-993000" primary control-plane node in "download-only-993000" cluster
	I0318 13:28:57.320030    7238 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 13:28:57.629276    7238 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 13:28:57.629323    7238 cache.go:56] Caching tarball of preloaded images
	I0318 13:28:57.630060    7238 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 13:28:57.635712    7238 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0318 13:28:57.635741    7238 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 13:28:58.252777    7238 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 13:29:18.690773    7238 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 13:29:18.690955    7238 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 13:29:19.388419    7238 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0318 13:29:19.388611    7238 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/download-only-993000/config.json ...
	I0318 13:29:19.388641    7238 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/download-only-993000/config.json: {Name:mk168a4f98d5d1e21683dd015f563fc2f060fdc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:29:19.389932    7238 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 13:29:19.390118    7238 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0318 13:29:19.726579    7238 out.go:169] 
	W0318 13:29:19.731766    7238 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1085a3520 0x1085a3520 0x1085a3520 0x1085a3520 0x1085a3520 0x1085a3520 0x1085a3520] Decompressors:map[bz2:0x140006dbbc0 gz:0x140006dbbc8 tar:0x140006dbb70 tar.bz2:0x140006dbb80 tar.gz:0x140006dbb90 tar.xz:0x140006dbba0 tar.zst:0x140006dbbb0 tbz2:0x140006dbb80 tgz:0x140006dbb90 txz:0x140006dbba0 tzst:0x140006dbbb0 xz:0x140006dbbd0 zip:0x140006dbbe0 zst:0x140006dbbd8] Getters:map[file:0x14002030d80 http:0x1400057e190 https:0x1400057e1e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0318 13:29:19.731789    7238 out_reason.go:110] 
	W0318 13:29:19.739609    7238 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:29:19.743637    7238 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-993000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (41.34s)
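
Editor's note: the exit-status-40 failure above reduces to a 404 from dl.k8s.io while fetching the kubectl checksum for darwin/arm64 at v1.20.0; per the error message, that artifact does not exist upstream for this version/architecture combination, so the download can never succeed on this arm64 host. A minimal manual check, not part of the test run:

	# HEAD the same checksum URL the getter fetched; consistent with the error
	# above, this should report 404 (assuming the artifact remains unpublished).
	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1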

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
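
Editor's note: this failure is purely downstream of the json-events failure above; because the kubectl download never completed, the cached binary the test stats was never written. A sketch of the same check the test performs, with the path taken from the log (hypothetical manual command):

	# Exits non-zero with "no such file or directory" while the cache is missing.
	stat /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/darwin/arm64/v1.20.0/kubectl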

TestOffline (10.03s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-926000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-926000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.863500958s)

-- stdout --
	* [offline-docker-926000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-926000" primary control-plane node in "offline-docker-926000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-926000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:43:01.034859    9286 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:43:01.035000    9286 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:43:01.035003    9286 out.go:304] Setting ErrFile to fd 2...
	I0318 13:43:01.035005    9286 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:43:01.035156    9286 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:43:01.036364    9286 out.go:298] Setting JSON to false
	I0318 13:43:01.054308    9286 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6153,"bootTime":1710788428,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:43:01.054386    9286 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:43:01.059635    9286 out.go:177] * [offline-docker-926000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:43:01.066663    9286 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:43:01.066716    9286 notify.go:220] Checking for updates...
	I0318 13:43:01.073533    9286 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:43:01.076607    9286 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:43:01.079603    9286 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:43:01.082606    9286 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:43:01.085619    9286 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:43:01.088959    9286 config.go:182] Loaded profile config "multinode-685000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:43:01.089028    9286 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:43:01.092585    9286 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 13:43:01.099625    9286 start.go:297] selected driver: qemu2
	I0318 13:43:01.099640    9286 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:43:01.099648    9286 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:43:01.101647    9286 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:43:01.104517    9286 out.go:177] * Automatically selected the socket_vmnet network
	I0318 13:43:01.107666    9286 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:43:01.107698    9286 cni.go:84] Creating CNI manager for ""
	I0318 13:43:01.107704    9286 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:43:01.107707    9286 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 13:43:01.107742    9286 start.go:340] cluster config:
	{Name:offline-docker-926000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-926000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:43:01.112344    9286 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:43:01.119517    9286 out.go:177] * Starting "offline-docker-926000" primary control-plane node in "offline-docker-926000" cluster
	I0318 13:43:01.123518    9286 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:43:01.123547    9286 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:43:01.123557    9286 cache.go:56] Caching tarball of preloaded images
	I0318 13:43:01.123629    9286 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:43:01.123635    9286 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:43:01.123701    9286 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/offline-docker-926000/config.json ...
	I0318 13:43:01.123713    9286 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/offline-docker-926000/config.json: {Name:mk9be29b049c9a71013cb6d0dd1715c4f243eb62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:43:01.123986    9286 start.go:360] acquireMachinesLock for offline-docker-926000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:43:01.124019    9286 start.go:364] duration metric: took 23.292µs to acquireMachinesLock for "offline-docker-926000"
	I0318 13:43:01.124035    9286 start.go:93] Provisioning new machine with config: &{Name:offline-docker-926000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-926000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:43:01.124071    9286 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:43:01.128591    9286 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 13:43:01.143651    9286 start.go:159] libmachine.API.Create for "offline-docker-926000" (driver="qemu2")
	I0318 13:43:01.143679    9286 client.go:168] LocalClient.Create starting
	I0318 13:43:01.143751    9286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:43:01.143786    9286 main.go:141] libmachine: Decoding PEM data...
	I0318 13:43:01.143794    9286 main.go:141] libmachine: Parsing certificate...
	I0318 13:43:01.143840    9286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:43:01.143861    9286 main.go:141] libmachine: Decoding PEM data...
	I0318 13:43:01.143877    9286 main.go:141] libmachine: Parsing certificate...
	I0318 13:43:01.144240    9286 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:43:01.287560    9286 main.go:141] libmachine: Creating SSH key...
	I0318 13:43:01.438018    9286 main.go:141] libmachine: Creating Disk image...
	I0318 13:43:01.438029    9286 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:43:01.438217    9286 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/offline-docker-926000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/offline-docker-926000/disk.qcow2
	I0318 13:43:01.451108    9286 main.go:141] libmachine: STDOUT: 
	I0318 13:43:01.451136    9286 main.go:141] libmachine: STDERR: 
	I0318 13:43:01.451192    9286 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/offline-docker-926000/disk.qcow2 +20000M
	I0318 13:43:01.463132    9286 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:43:01.463161    9286 main.go:141] libmachine: STDERR: 
	I0318 13:43:01.463183    9286 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/offline-docker-926000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/offline-docker-926000/disk.qcow2
	I0318 13:43:01.463188    9286 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:43:01.463218    9286 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/offline-docker-926000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/offline-docker-926000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/offline-docker-926000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:73:7c:4e:d3:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/offline-docker-926000/disk.qcow2
	I0318 13:43:01.465207    9286 main.go:141] libmachine: STDOUT: 
	I0318 13:43:01.465223    9286 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:43:01.465244    9286 client.go:171] duration metric: took 321.561834ms to LocalClient.Create
	I0318 13:43:03.466758    9286 start.go:128] duration metric: took 2.342692917s to createHost
	I0318 13:43:03.466774    9286 start.go:83] releasing machines lock for "offline-docker-926000", held for 2.342762833s
	W0318 13:43:03.466792    9286 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:43:03.475105    9286 out.go:177] * Deleting "offline-docker-926000" in qemu2 ...
	W0318 13:43:03.487285    9286 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:43:03.487297    9286 start.go:728] Will try again in 5 seconds ...
	I0318 13:43:08.489459    9286 start.go:360] acquireMachinesLock for offline-docker-926000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:43:08.489960    9286 start.go:364] duration metric: took 387.458µs to acquireMachinesLock for "offline-docker-926000"
	I0318 13:43:08.490098    9286 start.go:93] Provisioning new machine with config: &{Name:offline-docker-926000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-926000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:43:08.490432    9286 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:43:08.500277    9286 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 13:43:08.550059    9286 start.go:159] libmachine.API.Create for "offline-docker-926000" (driver="qemu2")
	I0318 13:43:08.550159    9286 client.go:168] LocalClient.Create starting
	I0318 13:43:08.550332    9286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:43:08.550404    9286 main.go:141] libmachine: Decoding PEM data...
	I0318 13:43:08.550427    9286 main.go:141] libmachine: Parsing certificate...
	I0318 13:43:08.550504    9286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:43:08.550552    9286 main.go:141] libmachine: Decoding PEM data...
	I0318 13:43:08.550563    9286 main.go:141] libmachine: Parsing certificate...
	I0318 13:43:08.551128    9286 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:43:08.704049    9286 main.go:141] libmachine: Creating SSH key...
	I0318 13:43:08.796587    9286 main.go:141] libmachine: Creating Disk image...
	I0318 13:43:08.796593    9286 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:43:08.796786    9286 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/offline-docker-926000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/offline-docker-926000/disk.qcow2
	I0318 13:43:08.809141    9286 main.go:141] libmachine: STDOUT: 
	I0318 13:43:08.809164    9286 main.go:141] libmachine: STDERR: 
	I0318 13:43:08.809215    9286 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/offline-docker-926000/disk.qcow2 +20000M
	I0318 13:43:08.819945    9286 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:43:08.819962    9286 main.go:141] libmachine: STDERR: 
	I0318 13:43:08.819974    9286 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/offline-docker-926000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/offline-docker-926000/disk.qcow2
	I0318 13:43:08.819993    9286 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:43:08.820023    9286 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/offline-docker-926000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/offline-docker-926000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/offline-docker-926000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:6f:e2:63:f9:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/offline-docker-926000/disk.qcow2
	I0318 13:43:08.821669    9286 main.go:141] libmachine: STDOUT: 
	I0318 13:43:08.821685    9286 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:43:08.821696    9286 client.go:171] duration metric: took 271.516625ms to LocalClient.Create
	I0318 13:43:10.823858    9286 start.go:128] duration metric: took 2.333394959s to createHost
	I0318 13:43:10.823912    9286 start.go:83] releasing machines lock for "offline-docker-926000", held for 2.333924417s
	W0318 13:43:10.824367    9286 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-926000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-926000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:43:10.837284    9286 out.go:177] 
	W0318 13:43:10.841440    9286 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:43:10.841475    9286 out.go:239] * 
	* 
	W0318 13:43:10.844820    9286 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:43:10.852220    9286 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-926000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-03-18 13:43:10.867911 -0700 PDT m=+872.485438168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-926000 -n offline-docker-926000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-926000 -n offline-docker-926000: exit status 7 (67.681875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-926000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-926000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-926000
--- FAIL: TestOffline (10.03s)
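
Editor's note: TestOffline never reaches Kubernetes; both qemu2 VM-creation attempts abort because socket_vmnet_client cannot reach /var/run/socket_vmnet (Connection refused), which indicates the socket_vmnet daemon is not running on the Jenkins agent. A plausible first triage on the host, assuming the install layout shown in the log (hypothetical commands, not part of the report):

	# Is the daemon alive, and does its unix socket exist at the path minikube uses?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# The client binary the log invokes should also be present and executable:
	ls -l /opt/socket_vmnet/bin/socket_vmnet_client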

TestAddons/Setup (10.23s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-980000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-980000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.23032575s)

-- stdout --
	* [addons-980000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-980000" primary control-plane node in "addons-980000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-980000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:30:40.652005    7492 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:30:40.652114    7492 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:30:40.652117    7492 out.go:304] Setting ErrFile to fd 2...
	I0318 13:30:40.652119    7492 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:30:40.652242    7492 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:30:40.653398    7492 out.go:298] Setting JSON to false
	I0318 13:30:40.669381    7492 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5412,"bootTime":1710788428,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:30:40.669448    7492 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:30:40.673337    7492 out.go:177] * [addons-980000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:30:40.680354    7492 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:30:40.684322    7492 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:30:40.680415    7492 notify.go:220] Checking for updates...
	I0318 13:30:40.690241    7492 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:30:40.693311    7492 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:30:40.696297    7492 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:30:40.699284    7492 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:30:40.702497    7492 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:30:40.706291    7492 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 13:30:40.713296    7492 start.go:297] selected driver: qemu2
	I0318 13:30:40.713302    7492 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:30:40.713310    7492 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:30:40.715578    7492 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:30:40.718304    7492 out.go:177] * Automatically selected the socket_vmnet network
	I0318 13:30:40.719689    7492 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:30:40.719737    7492 cni.go:84] Creating CNI manager for ""
	I0318 13:30:40.719745    7492 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:30:40.719751    7492 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 13:30:40.719778    7492 start.go:340] cluster config:
	{Name:addons-980000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_c
lient SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:30:40.724220    7492 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:30:40.732333    7492 out.go:177] * Starting "addons-980000" primary control-plane node in "addons-980000" cluster
	I0318 13:30:40.736305    7492 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:30:40.736322    7492 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:30:40.736335    7492 cache.go:56] Caching tarball of preloaded images
	I0318 13:30:40.736398    7492 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:30:40.736422    7492 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:30:40.736687    7492 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/addons-980000/config.json ...
	I0318 13:30:40.736700    7492 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/addons-980000/config.json: {Name:mka7cbba33c97692efeb72d255e81f33253aecb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:30:40.736926    7492 start.go:360] acquireMachinesLock for addons-980000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:30:40.737048    7492 start.go:364] duration metric: took 116.042µs to acquireMachinesLock for "addons-980000"
	I0318 13:30:40.737061    7492 start.go:93] Provisioning new machine with config: &{Name:addons-980000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.28.4 ClusterName:addons-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:30:40.737088    7492 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:30:40.741347    7492 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0318 13:30:40.759737    7492 start.go:159] libmachine.API.Create for "addons-980000" (driver="qemu2")
	I0318 13:30:40.759772    7492 client.go:168] LocalClient.Create starting
	I0318 13:30:40.759899    7492 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:30:40.880759    7492 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:30:41.117744    7492 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:30:41.262887    7492 main.go:141] libmachine: Creating SSH key...
	I0318 13:30:41.324343    7492 main.go:141] libmachine: Creating Disk image...
	I0318 13:30:41.324349    7492 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:30:41.324545    7492 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/addons-980000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/addons-980000/disk.qcow2
	I0318 13:30:41.337001    7492 main.go:141] libmachine: STDOUT: 
	I0318 13:30:41.337029    7492 main.go:141] libmachine: STDERR: 
	I0318 13:30:41.337081    7492 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/addons-980000/disk.qcow2 +20000M
	I0318 13:30:41.347690    7492 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:30:41.347718    7492 main.go:141] libmachine: STDERR: 
	I0318 13:30:41.347730    7492 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/addons-980000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/addons-980000/disk.qcow2
	I0318 13:30:41.347734    7492 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:30:41.347760    7492 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/addons-980000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/addons-980000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/addons-980000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:2e:5e:4d:8c:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/addons-980000/disk.qcow2
	I0318 13:30:41.349510    7492 main.go:141] libmachine: STDOUT: 
	I0318 13:30:41.349526    7492 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:30:41.349546    7492 client.go:171] duration metric: took 589.773041ms to LocalClient.Create
	I0318 13:30:43.351746    7492 start.go:128] duration metric: took 2.614651s to createHost
	I0318 13:30:43.351839    7492 start.go:83] releasing machines lock for "addons-980000", held for 2.614799833s
	W0318 13:30:43.351920    7492 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:30:43.367100    7492 out.go:177] * Deleting "addons-980000" in qemu2 ...
	W0318 13:30:43.393636    7492 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:30:43.393661    7492 start.go:728] Will try again in 5 seconds ...
	I0318 13:30:48.395917    7492 start.go:360] acquireMachinesLock for addons-980000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:30:48.396393    7492 start.go:364] duration metric: took 367µs to acquireMachinesLock for "addons-980000"
	I0318 13:30:48.396543    7492 start.go:93] Provisioning new machine with config: &{Name:addons-980000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.28.4 ClusterName:addons-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:30:48.396853    7492 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:30:48.408565    7492 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0318 13:30:48.457403    7492 start.go:159] libmachine.API.Create for "addons-980000" (driver="qemu2")
	I0318 13:30:48.457455    7492 client.go:168] LocalClient.Create starting
	I0318 13:30:48.457560    7492 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:30:48.457612    7492 main.go:141] libmachine: Decoding PEM data...
	I0318 13:30:48.457625    7492 main.go:141] libmachine: Parsing certificate...
	I0318 13:30:48.457687    7492 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:30:48.457727    7492 main.go:141] libmachine: Decoding PEM data...
	I0318 13:30:48.457738    7492 main.go:141] libmachine: Parsing certificate...
	I0318 13:30:48.458272    7492 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:30:48.612896    7492 main.go:141] libmachine: Creating SSH key...
	I0318 13:30:48.782811    7492 main.go:141] libmachine: Creating Disk image...
	I0318 13:30:48.782817    7492 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:30:48.783023    7492 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/addons-980000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/addons-980000/disk.qcow2
	I0318 13:30:48.795363    7492 main.go:141] libmachine: STDOUT: 
	I0318 13:30:48.795385    7492 main.go:141] libmachine: STDERR: 
	I0318 13:30:48.795442    7492 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/addons-980000/disk.qcow2 +20000M
	I0318 13:30:48.806266    7492 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:30:48.806285    7492 main.go:141] libmachine: STDERR: 
	I0318 13:30:48.806304    7492 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/addons-980000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/addons-980000/disk.qcow2
	I0318 13:30:48.806310    7492 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:30:48.806341    7492 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/addons-980000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/addons-980000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/addons-980000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:0f:9c:3d:d2:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/addons-980000/disk.qcow2
	I0318 13:30:48.807933    7492 main.go:141] libmachine: STDOUT: 
	I0318 13:30:48.807950    7492 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:30:48.807963    7492 client.go:171] duration metric: took 350.503333ms to LocalClient.Create
	I0318 13:30:50.809166    7492 start.go:128] duration metric: took 2.412289208s to createHost
	I0318 13:30:50.809271    7492 start.go:83] releasing machines lock for "addons-980000", held for 2.412871167s
	W0318 13:30:50.809751    7492 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-980000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-980000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:30:50.820450    7492 out.go:177] 
	W0318 13:30:50.825446    7492 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:30:50.825476    7492 out.go:239] * 
	* 
	W0318 13:30:50.827937    7492 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:30:50.836375    7492 out.go:177] 

** /stderr **
addons_test.go:111: out/minikube-darwin-arm64 start -p addons-980000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.23s)
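
Note: the QEMU command line logged above is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connected socket to qemu-system-aarch64 as file descriptor 3; that is what the "-netdev socket,id=net0,fd=3" flag refers to, and why a refused connection aborts the launch before QEMU ever runs. A schematic Go sketch of that fd-passing pattern follows; it is a simplified reconstruction under stated assumptions (trimmed argv, no error recovery), not socket_vmnet_client's actual source.

// fd3_sketch.go: connect to the vmnet socket, then start qemu with the
// connected socket inherited as fd 3.
package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		// This is the step that fails throughout this run.
		log.Fatalf("connect to socket_vmnet: %v", err)
	}
	// Obtain an *os.File for the connection so the child can inherit it.
	f, err := conn.(*net.UnixConn).File()
	if err != nil {
		log.Fatal(err)
	}
	cmd := exec.Command("qemu-system-aarch64",
		"-netdev", "socket,id=net0,fd=3",
		"-device", "virtio-net-pci,netdev=net0")
	// ExtraFiles entry i becomes fd 3+i in the child, so f is fd 3.
	cmd.ExtraFiles = []*os.File{f}
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}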

TestCertOptions (10.14s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-036000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-036000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.84833825s)

-- stdout --
	* [cert-options-036000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-036000" primary control-plane node in "cert-options-036000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-036000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-036000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-036000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-036000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-036000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (79.915208ms)

-- stdout --
	* The control-plane node cert-options-036000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-036000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-036000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-036000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-036000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-036000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (45.877125ms)

-- stdout --
	* The control-plane node cert-options-036000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-036000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-036000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-036000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-036000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-03-18 13:43:41.142803 -0700 PDT m=+902.760485043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-036000 -n cert-options-036000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-036000 -n cert-options-036000: exit status 7 (31.624541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-036000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-036000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-036000
--- FAIL: TestCertOptions (10.14s)
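
Note: cert_options_test.go:69 asserts that the apiserver certificate's SAN contains the two --apiserver-ips and two --apiserver-names passed on the command line; here the assertions fail vacuously because the VM never started and the openssl check over ssh returned exit status 83. For reference, the same SAN check can be expressed as a standalone Go sketch; the file path and expected values are taken from the test invocation above, and the program itself is illustrative, not the test's implementation.

// san_check.go: parse a PEM certificate and report whether the expected
// IPs and DNS names appear in its Subject Alternative Name.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	wantIPs := []string{"127.0.0.1", "192.168.15.15"}  // --apiserver-ips
	wantDNS := []string{"localhost", "www.google.com"} // --apiserver-names
	for _, want := range wantIPs {
		found := false
		for _, ip := range cert.IPAddresses {
			if ip.String() == want {
				found = true
			}
		}
		fmt.Printf("SAN IP  %-15s present: %v\n", want, found)
	}
	for _, want := range wantDNS {
		found := false
		for _, name := range cert.DNSNames {
			if name == want {
				found = true
			}
		}
		fmt.Printf("SAN DNS %-15s present: %v\n", want, found)
	}
}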

TestCertExpiration (195.24s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-526000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-526000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.828320875s)

-- stdout --
	* [cert-expiration-526000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-526000" primary control-plane node in "cert-expiration-526000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-526000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-526000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-526000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-526000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-526000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.233662792s)

-- stdout --
	* [cert-expiration-526000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-526000" primary control-plane node in "cert-expiration-526000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-526000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-526000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-526000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-526000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-526000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-526000" primary control-plane node in "cert-expiration-526000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-526000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-526000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-526000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-03-18 13:46:41.140613 -0700 PDT m=+1082.759214876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-526000 -n cert-expiration-526000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-526000 -n cert-expiration-526000: exit status 7 (69.663333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-526000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-526000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-526000
--- FAIL: TestCertExpiration (195.24s)
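
Note: the start flow traced by these logs is the same in every failing test: create (or restart) the host, hit the socket_vmnet refusal, delete the profile, wait five seconds ("Will try again in 5 seconds ..."), retry once, then exit with GUEST_PROVISION. The sketch below reproduces only that shape; the function name createHost is hypothetical and the error string is copied from the logs, so this is a schematic of the observed behavior, not minikube's code.

// retry_shape.go: one fixed-delay retry, then a hard failure.
package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the qemu2 driver's host creation, which fails
// the same way on both attempts in this run.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := createHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}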

TestDockerFlags (10.02s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-563000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-563000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.755407916s)

-- stdout --
	* [docker-flags-563000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-563000" primary control-plane node in "docker-flags-563000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-563000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:43:21.150507    9482 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:43:21.150634    9482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:43:21.150637    9482 out.go:304] Setting ErrFile to fd 2...
	I0318 13:43:21.150640    9482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:43:21.150782    9482 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:43:21.151875    9482 out.go:298] Setting JSON to false
	I0318 13:43:21.167830    9482 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6173,"bootTime":1710788428,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:43:21.167894    9482 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:43:21.174299    9482 out.go:177] * [docker-flags-563000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:43:21.181353    9482 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:43:21.185294    9482 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:43:21.181391    9482 notify.go:220] Checking for updates...
	I0318 13:43:21.189323    9482 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:43:21.192335    9482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:43:21.195334    9482 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:43:21.198285    9482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:43:21.201636    9482 config.go:182] Loaded profile config "force-systemd-flag-570000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:43:21.201703    9482 config.go:182] Loaded profile config "multinode-685000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:43:21.201750    9482 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:43:21.206236    9482 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 13:43:21.213281    9482 start.go:297] selected driver: qemu2
	I0318 13:43:21.213288    9482 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:43:21.213295    9482 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:43:21.215585    9482 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:43:21.220236    9482 out.go:177] * Automatically selected the socket_vmnet network
	I0318 13:43:21.223417    9482 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0318 13:43:21.223453    9482 cni.go:84] Creating CNI manager for ""
	I0318 13:43:21.223468    9482 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:43:21.223472    9482 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 13:43:21.223511    9482 start.go:340] cluster config:
	{Name:docker-flags-563000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-563000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:43:21.228273    9482 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:43:21.236249    9482 out.go:177] * Starting "docker-flags-563000" primary control-plane node in "docker-flags-563000" cluster
	I0318 13:43:21.240347    9482 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:43:21.240364    9482 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:43:21.240378    9482 cache.go:56] Caching tarball of preloaded images
	I0318 13:43:21.240446    9482 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:43:21.240452    9482 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:43:21.240520    9482 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/docker-flags-563000/config.json ...
	I0318 13:43:21.240534    9482 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/docker-flags-563000/config.json: {Name:mkd5e263edd716351b03bfa451aa90441d11e825 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:43:21.240761    9482 start.go:360] acquireMachinesLock for docker-flags-563000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:43:21.240799    9482 start.go:364] duration metric: took 27.084µs to acquireMachinesLock for "docker-flags-563000"
	I0318 13:43:21.240814    9482 start.go:93] Provisioning new machine with config: &{Name:docker-flags-563000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-563000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:43:21.240853    9482 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:43:21.248306    9482 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 13:43:21.266384    9482 start.go:159] libmachine.API.Create for "docker-flags-563000" (driver="qemu2")
	I0318 13:43:21.266413    9482 client.go:168] LocalClient.Create starting
	I0318 13:43:21.266483    9482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:43:21.266514    9482 main.go:141] libmachine: Decoding PEM data...
	I0318 13:43:21.266525    9482 main.go:141] libmachine: Parsing certificate...
	I0318 13:43:21.266573    9482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:43:21.266598    9482 main.go:141] libmachine: Decoding PEM data...
	I0318 13:43:21.266606    9482 main.go:141] libmachine: Parsing certificate...
	I0318 13:43:21.266999    9482 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:43:21.410825    9482 main.go:141] libmachine: Creating SSH key...
	I0318 13:43:21.475508    9482 main.go:141] libmachine: Creating Disk image...
	I0318 13:43:21.475514    9482 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:43:21.475683    9482 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/docker-flags-563000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/docker-flags-563000/disk.qcow2
	I0318 13:43:21.488101    9482 main.go:141] libmachine: STDOUT: 
	I0318 13:43:21.488118    9482 main.go:141] libmachine: STDERR: 
	I0318 13:43:21.488185    9482 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/docker-flags-563000/disk.qcow2 +20000M
	I0318 13:43:21.499152    9482 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:43:21.499167    9482 main.go:141] libmachine: STDERR: 
	I0318 13:43:21.499185    9482 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/docker-flags-563000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/docker-flags-563000/disk.qcow2
	I0318 13:43:21.499190    9482 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:43:21.499225    9482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/docker-flags-563000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/docker-flags-563000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/docker-flags-563000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:5c:76:76:17:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/docker-flags-563000/disk.qcow2
	I0318 13:43:21.501144    9482 main.go:141] libmachine: STDOUT: 
	I0318 13:43:21.501158    9482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:43:21.501182    9482 client.go:171] duration metric: took 234.757791ms to LocalClient.Create
	I0318 13:43:23.503397    9482 start.go:128] duration metric: took 2.262534333s to createHost
	I0318 13:43:23.503455    9482 start.go:83] releasing machines lock for "docker-flags-563000", held for 2.262658291s
	W0318 13:43:23.503519    9482 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:43:23.528706    9482 out.go:177] * Deleting "docker-flags-563000" in qemu2 ...
	W0318 13:43:23.549620    9482 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:43:23.549638    9482 start.go:728] Will try again in 5 seconds ...
	I0318 13:43:28.551799    9482 start.go:360] acquireMachinesLock for docker-flags-563000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:43:28.553737    9482 start.go:364] duration metric: took 1.838625ms to acquireMachinesLock for "docker-flags-563000"
	I0318 13:43:28.553835    9482 start.go:93] Provisioning new machine with config: &{Name:docker-flags-563000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-563000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:43:28.554109    9482 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:43:28.561492    9482 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 13:43:28.604568    9482 start.go:159] libmachine.API.Create for "docker-flags-563000" (driver="qemu2")
	I0318 13:43:28.604621    9482 client.go:168] LocalClient.Create starting
	I0318 13:43:28.604742    9482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:43:28.604798    9482 main.go:141] libmachine: Decoding PEM data...
	I0318 13:43:28.604815    9482 main.go:141] libmachine: Parsing certificate...
	I0318 13:43:28.604876    9482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:43:28.604927    9482 main.go:141] libmachine: Decoding PEM data...
	I0318 13:43:28.604944    9482 main.go:141] libmachine: Parsing certificate...
	I0318 13:43:28.605454    9482 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:43:28.757993    9482 main.go:141] libmachine: Creating SSH key...
	I0318 13:43:28.795564    9482 main.go:141] libmachine: Creating Disk image...
	I0318 13:43:28.795570    9482 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:43:28.795742    9482 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/docker-flags-563000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/docker-flags-563000/disk.qcow2
	I0318 13:43:28.807811    9482 main.go:141] libmachine: STDOUT: 
	I0318 13:43:28.807833    9482 main.go:141] libmachine: STDERR: 
	I0318 13:43:28.807894    9482 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/docker-flags-563000/disk.qcow2 +20000M
	I0318 13:43:28.818730    9482 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:43:28.818751    9482 main.go:141] libmachine: STDERR: 
	I0318 13:43:28.818760    9482 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/docker-flags-563000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/docker-flags-563000/disk.qcow2
	I0318 13:43:28.818766    9482 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:43:28.818798    9482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/docker-flags-563000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/docker-flags-563000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/docker-flags-563000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:9d:93:3a:cc:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/docker-flags-563000/disk.qcow2
	I0318 13:43:28.820577    9482 main.go:141] libmachine: STDOUT: 
	I0318 13:43:28.820597    9482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:43:28.820608    9482 client.go:171] duration metric: took 215.9835ms to LocalClient.Create
	I0318 13:43:30.822762    9482 start.go:128] duration metric: took 2.268639583s to createHost
	I0318 13:43:30.822835    9482 start.go:83] releasing machines lock for "docker-flags-563000", held for 2.269077208s
	W0318 13:43:30.823206    9482 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-563000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-563000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:43:30.840088    9482 out.go:177] 
	W0318 13:43:30.847070    9482 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:43:30.847117    9482 out.go:239] * 
	* 
	W0318 13:43:30.849937    9482 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:43:30.860750    9482 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-563000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-563000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-563000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (78.951292ms)

-- stdout --
	* The control-plane node docker-flags-563000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-563000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-563000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-563000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-563000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-563000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-563000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-563000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-563000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.554583ms)

-- stdout --
	* The control-plane node docker-flags-563000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-563000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-563000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-563000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-563000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-563000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-03-18 13:43:31.003931 -0700 PDT m=+892.621561001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-563000 -n docker-flags-563000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-563000 -n docker-flags-563000: exit status 7 (30.351416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-563000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-563000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-563000
--- FAIL: TestDockerFlags (10.02s)
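Note on the failure mode: every createHost attempt in this block dies at the same step. socket_vmnet_client exits with 'Failed to connect to "/var/run/socket_vmnet": Connection refused' before qemu-system-aarch64 is ever launched, so no VM exists and every later ssh/status call reports state=Stopped. A minimal triage sketch for the CI host, assuming socket_vmnet is installed as a Homebrew service as the qemu2 driver docs describe (the client and socket paths below are taken verbatim from the log above):

	# Is the socket_vmnet daemon's unix socket present?
	ls -l /var/run/socket_vmnet

	# If the socket is missing, (re)start the daemon; "brew services"
	# is an assumption about how this host installed socket_vmnet.
	sudo brew services start socket_vmnet

	# Smoke-test the client/daemon handshake with a trivial child
	# command before re-running the failed test.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

The same "Connection refused" signature appears in TestForceSystemdFlag and TestForceSystemdEnv below, so a single daemon outage would account for all three failures.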

TestForceSystemdFlag (10.23s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-570000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-570000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.002010333s)

-- stdout --
	* [force-systemd-flag-570000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-570000" primary control-plane node in "force-systemd-flag-570000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-570000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:43:15.875363    9460 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:43:15.875546    9460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:43:15.875553    9460 out.go:304] Setting ErrFile to fd 2...
	I0318 13:43:15.875556    9460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:43:15.875801    9460 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:43:15.877039    9460 out.go:298] Setting JSON to false
	I0318 13:43:15.893607    9460 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6167,"bootTime":1710788428,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:43:15.893667    9460 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:43:15.900086    9460 out.go:177] * [force-systemd-flag-570000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:43:15.906931    9460 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:43:15.906978    9460 notify.go:220] Checking for updates...
	I0318 13:43:15.911977    9460 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:43:15.916006    9460 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:43:15.919846    9460 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:43:15.922914    9460 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:43:15.925947    9460 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:43:15.929235    9460 config.go:182] Loaded profile config "force-systemd-env-150000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:43:15.929305    9460 config.go:182] Loaded profile config "multinode-685000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:43:15.929363    9460 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:43:15.933959    9460 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 13:43:15.939912    9460 start.go:297] selected driver: qemu2
	I0318 13:43:15.939919    9460 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:43:15.939925    9460 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:43:15.942211    9460 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:43:15.945041    9460 out.go:177] * Automatically selected the socket_vmnet network
	I0318 13:43:15.948024    9460 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 13:43:15.948062    9460 cni.go:84] Creating CNI manager for ""
	I0318 13:43:15.948068    9460 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:43:15.948076    9460 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 13:43:15.948102    9460 start.go:340] cluster config:
	{Name:force-systemd-flag-570000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-570000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:43:15.952481    9460 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:43:15.959936    9460 out.go:177] * Starting "force-systemd-flag-570000" primary control-plane node in "force-systemd-flag-570000" cluster
	I0318 13:43:15.963983    9460 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:43:15.964004    9460 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:43:15.964010    9460 cache.go:56] Caching tarball of preloaded images
	I0318 13:43:15.964077    9460 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:43:15.964082    9460 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:43:15.964137    9460 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/force-systemd-flag-570000/config.json ...
	I0318 13:43:15.964149    9460 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/force-systemd-flag-570000/config.json: {Name:mk74186eec6785f36ac5e5cdf9995f2e570b71f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:43:15.964481    9460 start.go:360] acquireMachinesLock for force-systemd-flag-570000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:43:15.964518    9460 start.go:364] duration metric: took 27.084µs to acquireMachinesLock for "force-systemd-flag-570000"
	I0318 13:43:15.964533    9460 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-570000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:43:15.964569    9460 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:43:15.969001    9460 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 13:43:15.986410    9460 start.go:159] libmachine.API.Create for "force-systemd-flag-570000" (driver="qemu2")
	I0318 13:43:15.986432    9460 client.go:168] LocalClient.Create starting
	I0318 13:43:15.986487    9460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:43:15.986516    9460 main.go:141] libmachine: Decoding PEM data...
	I0318 13:43:15.986528    9460 main.go:141] libmachine: Parsing certificate...
	I0318 13:43:15.986571    9460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:43:15.986598    9460 main.go:141] libmachine: Decoding PEM data...
	I0318 13:43:15.986607    9460 main.go:141] libmachine: Parsing certificate...
	I0318 13:43:15.986994    9460 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:43:16.142426    9460 main.go:141] libmachine: Creating SSH key...
	I0318 13:43:16.233164    9460 main.go:141] libmachine: Creating Disk image...
	I0318 13:43:16.233170    9460 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:43:16.233335    9460 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-flag-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-flag-570000/disk.qcow2
	I0318 13:43:16.245591    9460 main.go:141] libmachine: STDOUT: 
	I0318 13:43:16.245617    9460 main.go:141] libmachine: STDERR: 
	I0318 13:43:16.245676    9460 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-flag-570000/disk.qcow2 +20000M
	I0318 13:43:16.256329    9460 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:43:16.256346    9460 main.go:141] libmachine: STDERR: 
	I0318 13:43:16.256360    9460 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-flag-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-flag-570000/disk.qcow2
	I0318 13:43:16.256365    9460 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:43:16.256399    9460 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-flag-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-flag-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-flag-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:46:01:a8:12:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-flag-570000/disk.qcow2
	I0318 13:43:16.258079    9460 main.go:141] libmachine: STDOUT: 
	I0318 13:43:16.258094    9460 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:43:16.258119    9460 client.go:171] duration metric: took 271.683667ms to LocalClient.Create
	I0318 13:43:18.260343    9460 start.go:128] duration metric: took 2.295757416s to createHost
	I0318 13:43:18.260436    9460 start.go:83] releasing machines lock for "force-systemd-flag-570000", held for 2.295918208s
	W0318 13:43:18.260497    9460 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:43:18.270660    9460 out.go:177] * Deleting "force-systemd-flag-570000" in qemu2 ...
	W0318 13:43:18.299159    9460 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:43:18.299186    9460 start.go:728] Will try again in 5 seconds ...
	I0318 13:43:23.301320    9460 start.go:360] acquireMachinesLock for force-systemd-flag-570000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:43:23.503592    9460 start.go:364] duration metric: took 202.161417ms to acquireMachinesLock for "force-systemd-flag-570000"
	I0318 13:43:23.503745    9460 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-570000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:43:23.504011    9460 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:43:23.519597    9460 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 13:43:23.567494    9460 start.go:159] libmachine.API.Create for "force-systemd-flag-570000" (driver="qemu2")
	I0318 13:43:23.567746    9460 client.go:168] LocalClient.Create starting
	I0318 13:43:23.567919    9460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:43:23.567987    9460 main.go:141] libmachine: Decoding PEM data...
	I0318 13:43:23.568005    9460 main.go:141] libmachine: Parsing certificate...
	I0318 13:43:23.568078    9460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:43:23.568119    9460 main.go:141] libmachine: Decoding PEM data...
	I0318 13:43:23.568133    9460 main.go:141] libmachine: Parsing certificate...
	I0318 13:43:23.568717    9460 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:43:23.726320    9460 main.go:141] libmachine: Creating SSH key...
	I0318 13:43:23.767391    9460 main.go:141] libmachine: Creating Disk image...
	I0318 13:43:23.767397    9460 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:43:23.767568    9460 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-flag-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-flag-570000/disk.qcow2
	I0318 13:43:23.779754    9460 main.go:141] libmachine: STDOUT: 
	I0318 13:43:23.779773    9460 main.go:141] libmachine: STDERR: 
	I0318 13:43:23.779820    9460 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-flag-570000/disk.qcow2 +20000M
	I0318 13:43:23.790487    9460 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:43:23.790504    9460 main.go:141] libmachine: STDERR: 
	I0318 13:43:23.790515    9460 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-flag-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-flag-570000/disk.qcow2
	I0318 13:43:23.790519    9460 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:43:23.790549    9460 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-flag-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-flag-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-flag-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:ca:0d:f6:01:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-flag-570000/disk.qcow2
	I0318 13:43:23.792267    9460 main.go:141] libmachine: STDOUT: 
	I0318 13:43:23.792283    9460 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:43:23.792297    9460 client.go:171] duration metric: took 224.531042ms to LocalClient.Create
	I0318 13:43:25.794463    9460 start.go:128] duration metric: took 2.29044025s to createHost
	I0318 13:43:25.794513    9460 start.go:83] releasing machines lock for "force-systemd-flag-570000", held for 2.290831667s
	W0318 13:43:25.794817    9460 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-570000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-570000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:43:25.813631    9460 out.go:177] 
	W0318 13:43:25.820537    9460 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:43:25.820566    9460 out.go:239] * 
	* 
	W0318 13:43:25.823026    9460 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:43:25.833417    9460 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-570000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-570000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-570000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (84.722875ms)

-- stdout --
	* The control-plane node force-systemd-flag-570000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-570000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-570000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-03-18 13:43:25.936683 -0700 PDT m=+887.554287335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-570000 -n force-systemd-flag-570000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-570000 -n force-systemd-flag-570000: exit status 7 (35.043417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-570000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-570000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-570000
--- FAIL: TestForceSystemdFlag (10.23s)
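As with TestDockerFlags above, the VM never boots, so the assertion at docker_test.go:110 only ever runs against a stopped profile. For reference, the check it would perform on a healthy cluster is the ssh command already captured in this log; with --force-systemd set, Docker is expected to report the systemd cgroup driver rather than the cgroupfs default (the command is verbatim from the log, the expected value is inferred from the flag name, not from this run):

	out/minikube-darwin-arm64 -p force-systemd-flag-570000 ssh \
	  "docker info --format {{.CgroupDriver}}"
	# on a healthy --force-systemd cluster this should print: systemd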

TestForceSystemdEnv (10.09s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-150000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-150000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.868373334s)

-- stdout --
	* [force-systemd-env-150000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-150000" primary control-plane node in "force-systemd-env-150000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-150000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:43:11.066980    9426 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:43:11.067173    9426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:43:11.067180    9426 out.go:304] Setting ErrFile to fd 2...
	I0318 13:43:11.067183    9426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:43:11.067315    9426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:43:11.068308    9426 out.go:298] Setting JSON to false
	I0318 13:43:11.084668    9426 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6163,"bootTime":1710788428,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:43:11.084735    9426 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:43:11.089382    9426 out.go:177] * [force-systemd-env-150000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:43:11.100196    9426 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:43:11.095293    9426 notify.go:220] Checking for updates...
	I0318 13:43:11.108169    9426 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:43:11.116040    9426 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:43:11.124212    9426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:43:11.131225    9426 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:43:11.138170    9426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0318 13:43:11.142525    9426 config.go:182] Loaded profile config "multinode-685000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:43:11.142569    9426 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:43:11.146258    9426 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 13:43:11.153200    9426 start.go:297] selected driver: qemu2
	I0318 13:43:11.153209    9426 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:43:11.153214    9426 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:43:11.155552    9426 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:43:11.159262    9426 out.go:177] * Automatically selected the socket_vmnet network
	I0318 13:43:11.163235    9426 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 13:43:11.163277    9426 cni.go:84] Creating CNI manager for ""
	I0318 13:43:11.163285    9426 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:43:11.163289    9426 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 13:43:11.163312    9426 start.go:340] cluster config:
	{Name:force-systemd-env-150000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:43:11.167867    9426 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:43:11.169979    9426 out.go:177] * Starting "force-systemd-env-150000" primary control-plane node in "force-systemd-env-150000" cluster
	I0318 13:43:11.178217    9426 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:43:11.178232    9426 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:43:11.178240    9426 cache.go:56] Caching tarball of preloaded images
	I0318 13:43:11.178297    9426 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:43:11.178303    9426 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:43:11.178360    9426 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/force-systemd-env-150000/config.json ...
	I0318 13:43:11.178372    9426 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/force-systemd-env-150000/config.json: {Name:mk78123091eda168bcd2713eae5dbce6ff4f8b96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:43:11.178720    9426 start.go:360] acquireMachinesLock for force-systemd-env-150000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:43:11.178760    9426 start.go:364] duration metric: took 27µs to acquireMachinesLock for "force-systemd-env-150000"
	I0318 13:43:11.178773    9426 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-150000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:43:11.178800    9426 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:43:11.187234    9426 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 13:43:11.204016    9426 start.go:159] libmachine.API.Create for "force-systemd-env-150000" (driver="qemu2")
	I0318 13:43:11.204037    9426 client.go:168] LocalClient.Create starting
	I0318 13:43:11.204103    9426 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:43:11.204139    9426 main.go:141] libmachine: Decoding PEM data...
	I0318 13:43:11.204149    9426 main.go:141] libmachine: Parsing certificate...
	I0318 13:43:11.204189    9426 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:43:11.204212    9426 main.go:141] libmachine: Decoding PEM data...
	I0318 13:43:11.204219    9426 main.go:141] libmachine: Parsing certificate...
	I0318 13:43:11.204573    9426 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:43:11.348142    9426 main.go:141] libmachine: Creating SSH key...
	I0318 13:43:11.455907    9426 main.go:141] libmachine: Creating Disk image...
	I0318 13:43:11.455915    9426 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:43:11.456089    9426 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-env-150000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-env-150000/disk.qcow2
	I0318 13:43:11.468289    9426 main.go:141] libmachine: STDOUT: 
	I0318 13:43:11.468312    9426 main.go:141] libmachine: STDERR: 
	I0318 13:43:11.468367    9426 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-env-150000/disk.qcow2 +20000M
	I0318 13:43:11.479640    9426 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:43:11.479661    9426 main.go:141] libmachine: STDERR: 
	I0318 13:43:11.479696    9426 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-env-150000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-env-150000/disk.qcow2
	I0318 13:43:11.479703    9426 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:43:11.479742    9426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-env-150000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-env-150000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-env-150000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:51:5c:f0:0d:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-env-150000/disk.qcow2
	I0318 13:43:11.481456    9426 main.go:141] libmachine: STDOUT: 
	I0318 13:43:11.481473    9426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:43:11.481491    9426 client.go:171] duration metric: took 277.45125ms to LocalClient.Create
	I0318 13:43:13.483832    9426 start.go:128] duration metric: took 2.305016041s to createHost
	I0318 13:43:13.483914    9426 start.go:83] releasing machines lock for "force-systemd-env-150000", held for 2.305155667s
	W0318 13:43:13.483996    9426 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:43:13.491201    9426 out.go:177] * Deleting "force-systemd-env-150000" in qemu2 ...
	W0318 13:43:13.517635    9426 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:43:13.517666    9426 start.go:728] Will try again in 5 seconds ...
	I0318 13:43:18.519789    9426 start.go:360] acquireMachinesLock for force-systemd-env-150000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:43:18.520139    9426 start.go:364] duration metric: took 278.542µs to acquireMachinesLock for "force-systemd-env-150000"
	I0318 13:43:18.520247    9426 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-150000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:43:18.520547    9426 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:43:18.530950    9426 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 13:43:18.579439    9426 start.go:159] libmachine.API.Create for "force-systemd-env-150000" (driver="qemu2")
	I0318 13:43:18.579481    9426 client.go:168] LocalClient.Create starting
	I0318 13:43:18.579591    9426 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:43:18.579654    9426 main.go:141] libmachine: Decoding PEM data...
	I0318 13:43:18.579671    9426 main.go:141] libmachine: Parsing certificate...
	I0318 13:43:18.579740    9426 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:43:18.579782    9426 main.go:141] libmachine: Decoding PEM data...
	I0318 13:43:18.579796    9426 main.go:141] libmachine: Parsing certificate...
	I0318 13:43:18.580333    9426 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:43:18.737201    9426 main.go:141] libmachine: Creating SSH key...
	I0318 13:43:18.832782    9426 main.go:141] libmachine: Creating Disk image...
	I0318 13:43:18.832788    9426 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:43:18.832965    9426 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-env-150000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-env-150000/disk.qcow2
	I0318 13:43:18.845338    9426 main.go:141] libmachine: STDOUT: 
	I0318 13:43:18.845371    9426 main.go:141] libmachine: STDERR: 
	I0318 13:43:18.845432    9426 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-env-150000/disk.qcow2 +20000M
	I0318 13:43:18.856105    9426 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:43:18.856125    9426 main.go:141] libmachine: STDERR: 
	I0318 13:43:18.856137    9426 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-env-150000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-env-150000/disk.qcow2
	I0318 13:43:18.856141    9426 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:43:18.856176    9426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-env-150000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-env-150000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-env-150000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:d6:6d:78:d1:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/force-systemd-env-150000/disk.qcow2
	I0318 13:43:18.857902    9426 main.go:141] libmachine: STDOUT: 
	I0318 13:43:18.857924    9426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:43:18.857940    9426 client.go:171] duration metric: took 278.455541ms to LocalClient.Create
	I0318 13:43:20.860104    9426 start.go:128] duration metric: took 2.339536958s to createHost
	I0318 13:43:20.860159    9426 start.go:83] releasing machines lock for "force-systemd-env-150000", held for 2.340009583s
	W0318 13:43:20.860567    9426 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:43:20.873081    9426 out.go:177] 
	W0318 13:43:20.877250    9426 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:43:20.877288    9426 out.go:239] * 
	* 
	W0318 13:43:20.879680    9426 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:43:20.889163    9426 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-150000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-150000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-150000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.090875ms)

-- stdout --
	* The control-plane node force-systemd-env-150000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-150000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-150000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-03-18 13:43:20.984104 -0700 PDT m=+882.601682960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-150000 -n force-systemd-env-150000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-150000 -n force-systemd-env-150000: exit status 7 (35.437625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-150000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-150000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-150000
--- FAIL: TestForceSystemdEnv (10.09s)
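This failure, like the others in this run, bottoms out in one stderr line: Failed to connect to "/var/run/socket_vmnet": Connection refused. The socket_vmnet daemon that bridges qemu2 guests onto the host network was not listening on its Unix socket, so libmachine could never hand a network fd to QEMU. A minimal pre-flight check for the build host, assuming socket_vmnet is installed under /opt/socket_vmnet as the executed command lines above show (the gateway address is the socket_vmnet README default, not a value taken from this run):

	# Is the daemon running and its socket present?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If not, start it (normally managed by launchd on the CI host):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

With the daemon up, the qemu-system-aarch64 invocation logged above attaches to the fd passed in by socket_vmnet_client instead of exiting with status 1.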

TestErrorSpam/setup (9.91s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-652000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-652000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 --driver=qemu2 : exit status 80 (9.903954209s)

-- stdout --
	* [nospam-652000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-652000" primary control-plane node in "nospam-652000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-652000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-652000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-652000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-652000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-652000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=18421
- KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-652000" primary control-plane node in "nospam-652000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-652000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-652000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.91s)

TestFunctional/serial/StartWithProxy (10.05s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-229000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-229000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.980713166s)

-- stdout --
	* [functional-229000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-229000" primary control-plane node in "functional-229000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-229000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50963 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50963 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50963 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-229000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-229000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-229000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=18421
- KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-229000" primary control-plane node in "functional-229000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-229000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:50963 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:50963 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:50963 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-229000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000: exit status 7 (68.639791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-229000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (10.05s)
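Both wanted markers ("Found network options:" and "You appear to be using a proxy") are printed only after minikube gets far enough to evaluate the proxy environment against a created host; this run aborted during host creation, so neither string could ever appear. A sketch of the manual repro once socket_vmnet is healthy, reusing the args from the log above (the test's local proxy on port 50963 is long gone, so that value is illustrative only):

	HTTP_PROXY=localhost:50963 out/minikube-darwin-arm64 start -p functional-229000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 2>&1 | grep -E 'Found network options|You appear to be using a proxy'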

TestFunctional/serial/SoftStart (5.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-229000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-229000 --alsologtostderr -v=8: exit status 80 (5.188237917s)

-- stdout --
	* [functional-229000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-229000" primary control-plane node in "functional-229000" cluster
	* Restarting existing qemu2 VM for "functional-229000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-229000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:31:20.904141    7658 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:31:20.904267    7658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:31:20.904270    7658 out.go:304] Setting ErrFile to fd 2...
	I0318 13:31:20.904273    7658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:31:20.904398    7658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:31:20.905398    7658 out.go:298] Setting JSON to false
	I0318 13:31:20.921653    7658 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5452,"bootTime":1710788428,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:31:20.921713    7658 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:31:20.926543    7658 out.go:177] * [functional-229000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:31:20.933492    7658 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:31:20.937385    7658 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:31:20.933557    7658 notify.go:220] Checking for updates...
	I0318 13:31:20.941496    7658 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:31:20.944502    7658 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:31:20.947488    7658 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:31:20.950440    7658 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:31:20.953804    7658 config.go:182] Loaded profile config "functional-229000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:31:20.953852    7658 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:31:20.958470    7658 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 13:31:20.965468    7658 start.go:297] selected driver: qemu2
	I0318 13:31:20.965474    7658 start.go:901] validating driver "qemu2" against &{Name:functional-229000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-229000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:31:20.965518    7658 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:31:20.967723    7658 cni.go:84] Creating CNI manager for ""
	I0318 13:31:20.967742    7658 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:31:20.967795    7658 start.go:340] cluster config:
	{Name:functional-229000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-229000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:31:20.972140    7658 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:31:20.980499    7658 out.go:177] * Starting "functional-229000" primary control-plane node in "functional-229000" cluster
	I0318 13:31:20.984515    7658 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:31:20.984533    7658 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:31:20.984550    7658 cache.go:56] Caching tarball of preloaded images
	I0318 13:31:20.984605    7658 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:31:20.984611    7658 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:31:20.984669    7658 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/functional-229000/config.json ...
	I0318 13:31:20.985140    7658 start.go:360] acquireMachinesLock for functional-229000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:31:20.985167    7658 start.go:364] duration metric: took 21.334µs to acquireMachinesLock for "functional-229000"
	I0318 13:31:20.985177    7658 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:31:20.985184    7658 fix.go:54] fixHost starting: 
	I0318 13:31:20.985317    7658 fix.go:112] recreateIfNeeded on functional-229000: state=Stopped err=<nil>
	W0318 13:31:20.985326    7658 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:31:20.993483    7658 out.go:177] * Restarting existing qemu2 VM for "functional-229000" ...
	I0318 13:31:20.996566    7658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:28:e9:a1:8c:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/disk.qcow2
	I0318 13:31:20.998692    7658 main.go:141] libmachine: STDOUT: 
	I0318 13:31:20.998712    7658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:31:20.998745    7658 fix.go:56] duration metric: took 13.5615ms for fixHost
	I0318 13:31:20.998752    7658 start.go:83] releasing machines lock for "functional-229000", held for 13.580042ms
	W0318 13:31:20.998759    7658 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:31:20.998804    7658 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:31:20.998810    7658 start.go:728] Will try again in 5 seconds ...
	I0318 13:31:26.001051    7658 start.go:360] acquireMachinesLock for functional-229000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:31:26.001498    7658 start.go:364] duration metric: took 323.209µs to acquireMachinesLock for "functional-229000"
	I0318 13:31:26.001665    7658 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:31:26.001688    7658 fix.go:54] fixHost starting: 
	I0318 13:31:26.002408    7658 fix.go:112] recreateIfNeeded on functional-229000: state=Stopped err=<nil>
	W0318 13:31:26.002438    7658 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:31:26.007811    7658 out.go:177] * Restarting existing qemu2 VM for "functional-229000" ...
	I0318 13:31:26.014938    7658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:28:e9:a1:8c:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/disk.qcow2
	I0318 13:31:26.025094    7658 main.go:141] libmachine: STDOUT: 
	I0318 13:31:26.025168    7658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:31:26.025274    7658 fix.go:56] duration metric: took 23.586208ms for fixHost
	I0318 13:31:26.025300    7658 start.go:83] releasing machines lock for "functional-229000", held for 23.748458ms
	W0318 13:31:26.025572    7658 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-229000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-229000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:31:26.032852    7658 out.go:177] 
	W0318 13:31:26.036898    7658 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:31:26.036926    7658 out.go:239] * 
	* 
	W0318 13:31:26.039723    7658 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:31:26.045858    7658 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-229000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.189899083s for "functional-229000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000: exit status 7 (68.873916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-229000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (30.366125ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-229000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000: exit status 7 (32.282667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-229000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
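This failure is pure fallout from the failed start: minikube exited before provisioning, so no context was ever written to the kubeconfig and there is nothing for current-context to return. A quick confirmation from the kubeconfig side, using the KUBECONFIG path shown in the start logs above:

	# An empty context list points at the start failure, not at kubectl itself;
	# a successful "minikube start -p functional-229000" recreates the entry.
	KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig kubectl config get-contexts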

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-229000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-229000 get po -A: exit status 1 (26.605167ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-229000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-229000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-229000\n"*: args "kubectl --context functional-229000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-229000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000: exit status 7 (32.610625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-229000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh sudo crictl images: exit status 83 (43.029791ms)

-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-229000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (41.816375ms)

-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-229000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (41.883625ms)

-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (43.980667ms)

-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-229000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.56s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 kubectl -- --context functional-229000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 kubectl -- --context functional-229000 get pods: exit status 1 (525.313334ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-229000
	* no server found for cluster "functional-229000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-229000 kubectl -- --context functional-229000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000: exit status 7 (34.507167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-229000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.56s)
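Note: this failure and the MinikubeKubectlCmdDirectly failure below share one root cause: with the VM stopped, no "functional-229000" context or cluster entry exists in the kubeconfig, so kubectl bails out before contacting any server. A quick confirmation sketch (standard kubectl subcommands, not part of the test suite; the context name is the one from the error above):

	kubectl config get-contexts                              # "functional-229000" should appear when the cluster is up
	kubectl config view -o jsonpath='{.clusters[*].name}'    # cluster entries backing those contexts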

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.72s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-229000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-229000 get pods: exit status 1 (685.780583ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-229000
	* no server found for cluster "functional-229000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-229000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000: exit status 7 (31.71325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-229000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.72s)

TestFunctional/serial/ExtraConfig (5.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-229000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-229000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.189855833s)

-- stdout --
	* [functional-229000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-229000" primary control-plane node in "functional-229000" cluster
	* Restarting existing qemu2 VM for "functional-229000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-229000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-229000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-229000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.190433958s for "functional-229000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000: exit status 7 (69.797708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-229000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)
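Note: both restart attempts die at the same step: libmachine launches qemu-system-aarch64 through socket_vmnet_client, and the client cannot reach the daemon's unix socket. A diagnostic sketch using the paths shown in the libmachine command line above (the checks themselves are an assumption about the CI host, not part of the test):

	ls -l /var/run/socket_vmnet    # the socket socket_vmnet_client tried to connect to
	pgrep -fl socket_vmnet         # whether a socket_vmnet daemon process is running at all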

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-229000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-229000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.234375ms)

** stderr ** 
	error: context "functional-229000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-229000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000: exit status 7 (32.230209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-229000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
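Note: the test parses the control-plane pods' JSON for health conditions, but with no kubeconfig context it never reaches the API server. On a healthy cluster the equivalent query (same label selector as the test; the jsonpath output format is added here purely for illustration) would look like:

	kubectl --context functional-229000 get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'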

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 logs: exit status 83 (81.883917ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-993000 | jenkins | v1.32.0 | 18 Mar 24 13:28 PDT |                     |
	|         | -p download-only-993000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 13:29 PDT | 18 Mar 24 13:29 PDT |
	| delete  | -p download-only-993000                                                  | download-only-993000 | jenkins | v1.32.0 | 18 Mar 24 13:29 PDT | 18 Mar 24 13:29 PDT |
	| start   | -o=json --download-only                                                  | download-only-051000 | jenkins | v1.32.0 | 18 Mar 24 13:29 PDT |                     |
	|         | -p download-only-051000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 13:29 PDT | 18 Mar 24 13:29 PDT |
	| delete  | -p download-only-051000                                                  | download-only-051000 | jenkins | v1.32.0 | 18 Mar 24 13:29 PDT | 18 Mar 24 13:29 PDT |
	| start   | -o=json --download-only                                                  | download-only-387000 | jenkins | v1.32.0 | 18 Mar 24 13:29 PDT |                     |
	|         | -p download-only-387000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                        |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT | 18 Mar 24 13:30 PDT |
	| delete  | -p download-only-387000                                                  | download-only-387000 | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT | 18 Mar 24 13:30 PDT |
	| delete  | -p download-only-993000                                                  | download-only-993000 | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT | 18 Mar 24 13:30 PDT |
	| delete  | -p download-only-051000                                                  | download-only-051000 | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT | 18 Mar 24 13:30 PDT |
	| delete  | -p download-only-387000                                                  | download-only-387000 | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT | 18 Mar 24 13:30 PDT |
	| start   | --download-only -p                                                       | binary-mirror-417000 | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT |                     |
	|         | binary-mirror-417000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:50931                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-417000                                                  | binary-mirror-417000 | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT | 18 Mar 24 13:30 PDT |
	| addons  | disable dashboard -p                                                     | addons-980000        | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT |                     |
	|         | addons-980000                                                            |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                      | addons-980000        | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT |                     |
	|         | addons-980000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-980000 --wait=true                                             | addons-980000        | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
	|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-980000                                                         | addons-980000        | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT | 18 Mar 24 13:30 PDT |
	| start   | -p nospam-652000 -n=1 --memory=2250 --wait=false                         | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-652000                                                         | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
	| start   | -p functional-229000                                                     | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-229000                                                     | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-229000 cache add                                              | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-229000 cache add                                              | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-229000 cache add                                              | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-229000 cache add                                              | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
	|         | minikube-local-cache-test:functional-229000                              |                      |         |         |                     |                     |
	| cache   | functional-229000 cache delete                                           | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
	|         | minikube-local-cache-test:functional-229000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
	| ssh     | functional-229000 ssh sudo                                               | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-229000                                                        | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-229000 ssh                                                    | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-229000 cache reload                                           | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
	| ssh     | functional-229000 ssh                                                    | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-229000 kubectl --                                             | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
	|         | --context functional-229000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-229000                                                     | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:31:35
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:31:35.416366    7748 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:31:35.416483    7748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:31:35.416485    7748 out.go:304] Setting ErrFile to fd 2...
	I0318 13:31:35.416486    7748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:31:35.416609    7748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:31:35.417611    7748 out.go:298] Setting JSON to false
	I0318 13:31:35.433531    7748 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5467,"bootTime":1710788428,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:31:35.433611    7748 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:31:35.438401    7748 out.go:177] * [functional-229000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:31:35.448297    7748 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:31:35.452365    7748 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:31:35.448331    7748 notify.go:220] Checking for updates...
	I0318 13:31:35.459315    7748 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:31:35.462343    7748 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:31:35.465338    7748 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:31:35.468334    7748 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:31:35.471690    7748 config.go:182] Loaded profile config "functional-229000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:31:35.471750    7748 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:31:35.475259    7748 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 13:31:35.482297    7748 start.go:297] selected driver: qemu2
	I0318 13:31:35.482300    7748 start.go:901] validating driver "qemu2" against &{Name:functional-229000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-229000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:31:35.482346    7748 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:31:35.484570    7748 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:31:35.484603    7748 cni.go:84] Creating CNI manager for ""
	I0318 13:31:35.484612    7748 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:31:35.484657    7748 start.go:340] cluster config:
	{Name:functional-229000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-229000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:31:35.489104    7748 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:31:35.497309    7748 out.go:177] * Starting "functional-229000" primary control-plane node in "functional-229000" cluster
	I0318 13:31:35.501257    7748 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:31:35.501268    7748 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:31:35.501276    7748 cache.go:56] Caching tarball of preloaded images
	I0318 13:31:35.501324    7748 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:31:35.501328    7748 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:31:35.501380    7748 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/functional-229000/config.json ...
	I0318 13:31:35.501808    7748 start.go:360] acquireMachinesLock for functional-229000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:31:35.501843    7748 start.go:364] duration metric: took 30.792µs to acquireMachinesLock for "functional-229000"
	I0318 13:31:35.501851    7748 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:31:35.501854    7748 fix.go:54] fixHost starting: 
	I0318 13:31:35.501967    7748 fix.go:112] recreateIfNeeded on functional-229000: state=Stopped err=<nil>
	W0318 13:31:35.501977    7748 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:31:35.510359    7748 out.go:177] * Restarting existing qemu2 VM for "functional-229000" ...
	I0318 13:31:35.514239    7748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:28:e9:a1:8c:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/disk.qcow2
	I0318 13:31:35.516247    7748 main.go:141] libmachine: STDOUT: 
	I0318 13:31:35.516264    7748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:31:35.516290    7748 fix.go:56] duration metric: took 14.434333ms for fixHost
	I0318 13:31:35.516293    7748 start.go:83] releasing machines lock for "functional-229000", held for 14.44725ms
	W0318 13:31:35.516299    7748 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:31:35.516322    7748 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:31:35.516326    7748 start.go:728] Will try again in 5 seconds ...
	I0318 13:31:40.518454    7748 start.go:360] acquireMachinesLock for functional-229000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:31:40.518819    7748 start.go:364] duration metric: took 294.083µs to acquireMachinesLock for "functional-229000"
	I0318 13:31:40.518979    7748 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:31:40.518992    7748 fix.go:54] fixHost starting: 
	I0318 13:31:40.519662    7748 fix.go:112] recreateIfNeeded on functional-229000: state=Stopped err=<nil>
	W0318 13:31:40.519683    7748 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:31:40.527202    7748 out.go:177] * Restarting existing qemu2 VM for "functional-229000" ...
	I0318 13:31:40.530528    7748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:28:e9:a1:8c:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/disk.qcow2
	I0318 13:31:40.540291    7748 main.go:141] libmachine: STDOUT: 
	I0318 13:31:40.540339    7748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:31:40.540409    7748 fix.go:56] duration metric: took 21.418791ms for fixHost
	I0318 13:31:40.540424    7748 start.go:83] releasing machines lock for "functional-229000", held for 21.563709ms
	W0318 13:31:40.540611    7748 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-229000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:31:40.547196    7748 out.go:177] 
	W0318 13:31:40.551212    7748 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:31:40.551238    7748 out.go:239] * 
	W0318 13:31:40.553597    7748 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:31:40.561975    7748 out.go:177] 
	
	
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-229000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-993000 | jenkins | v1.32.0 | 18 Mar 24 13:28 PDT |                     |
|         | -p download-only-993000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 13:29 PDT | 18 Mar 24 13:29 PDT |
| delete  | -p download-only-993000                                                  | download-only-993000 | jenkins | v1.32.0 | 18 Mar 24 13:29 PDT | 18 Mar 24 13:29 PDT |
| start   | -o=json --download-only                                                  | download-only-051000 | jenkins | v1.32.0 | 18 Mar 24 13:29 PDT |                     |
|         | -p download-only-051000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.28.4                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 13:29 PDT | 18 Mar 24 13:29 PDT |
| delete  | -p download-only-051000                                                  | download-only-051000 | jenkins | v1.32.0 | 18 Mar 24 13:29 PDT | 18 Mar 24 13:29 PDT |
| start   | -o=json --download-only                                                  | download-only-387000 | jenkins | v1.32.0 | 18 Mar 24 13:29 PDT |                     |
|         | -p download-only-387000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.29.0-rc.2                                        |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT | 18 Mar 24 13:30 PDT |
| delete  | -p download-only-387000                                                  | download-only-387000 | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT | 18 Mar 24 13:30 PDT |
| delete  | -p download-only-993000                                                  | download-only-993000 | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT | 18 Mar 24 13:30 PDT |
| delete  | -p download-only-051000                                                  | download-only-051000 | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT | 18 Mar 24 13:30 PDT |
| delete  | -p download-only-387000                                                  | download-only-387000 | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT | 18 Mar 24 13:30 PDT |
| start   | --download-only -p                                                       | binary-mirror-417000 | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT |                     |
|         | binary-mirror-417000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:50931                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-417000                                                  | binary-mirror-417000 | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT | 18 Mar 24 13:30 PDT |
| addons  | disable dashboard -p                                                     | addons-980000        | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT |                     |
|         | addons-980000                                                            |                      |         |         |                     |                     |
| addons  | enable dashboard -p                                                      | addons-980000        | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT |                     |
|         | addons-980000                                                            |                      |         |         |                     |                     |
| start   | -p addons-980000 --wait=true                                             | addons-980000        | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         | --addons=ingress                                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-980000                                                         | addons-980000        | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT | 18 Mar 24 13:30 PDT |
| start   | -p nospam-652000 -n=1 --memory=2250 --wait=false                         | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:30 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-652000 --log_dir                                                  | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-652000                                                         | nospam-652000        | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
| start   | -p functional-229000                                                     | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-229000                                                     | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-229000 cache add                                              | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-229000 cache add                                              | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-229000 cache add                                              | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-229000 cache add                                              | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
|         | minikube-local-cache-test:functional-229000                              |                      |         |         |                     |                     |
| cache   | functional-229000 cache delete                                           | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
|         | minikube-local-cache-test:functional-229000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
| ssh     | functional-229000 ssh sudo                                               | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-229000                                                        | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-229000 ssh                                                    | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-229000 cache reload                                           | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
| ssh     | functional-229000 ssh                                                    | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT | 18 Mar 24 13:31 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-229000 kubectl --                                             | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
|         | --context functional-229000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-229000                                                     | functional-229000    | jenkins | v1.32.0 | 18 Mar 24 13:31 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/03/18 13:31:35
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
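Every entry that follows uses the glog layout named on the line above: a severity letter (I/W/E/F), month and day, wall-clock time with microseconds, the thread id, then the source file and line. A minimal sketch for splitting one such line into labeled fields (plain POSIX shell plus sed -E; the sample is the first line of this log):

$ echo 'I0318 13:31:35.416366    7748 out.go:291] Setting OutFile to fd 1 ...' \
    | sed -E 's/^([IWEF])([0-9]{4}) ([0-9:.]+) +([0-9]+) ([^]]+)] (.*)/level=\1 date=\2 time=\3 tid=\4 src=\5 msg=\6/'
level=I date=0318 time=13:31:35.416366 tid=7748 src=out.go:291 msg=Setting OutFile to fd 1 ...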
I0318 13:31:35.416366    7748 out.go:291] Setting OutFile to fd 1 ...
I0318 13:31:35.416483    7748 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:31:35.416485    7748 out.go:304] Setting ErrFile to fd 2...
I0318 13:31:35.416486    7748 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:31:35.416609    7748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
I0318 13:31:35.417611    7748 out.go:298] Setting JSON to false
I0318 13:31:35.433531    7748 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5467,"bootTime":1710788428,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0318 13:31:35.433611    7748 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0318 13:31:35.438401    7748 out.go:177] * [functional-229000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
I0318 13:31:35.448297    7748 out.go:177]   - MINIKUBE_LOCATION=18421
I0318 13:31:35.452365    7748 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
I0318 13:31:35.448331    7748 notify.go:220] Checking for updates...
I0318 13:31:35.459315    7748 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0318 13:31:35.462343    7748 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0318 13:31:35.465338    7748 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
I0318 13:31:35.468334    7748 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0318 13:31:35.471690    7748 config.go:182] Loaded profile config "functional-229000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 13:31:35.471750    7748 driver.go:392] Setting default libvirt URI to qemu:///system
I0318 13:31:35.475259    7748 out.go:177] * Using the qemu2 driver based on existing profile
I0318 13:31:35.482297    7748 start.go:297] selected driver: qemu2
I0318 13:31:35.482300    7748 start.go:901] validating driver "qemu2" against &{Name:functional-229000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-229000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0318 13:31:35.482346    7748 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0318 13:31:35.484570    7748 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0318 13:31:35.484603    7748 cni.go:84] Creating CNI manager for ""
I0318 13:31:35.484612    7748 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0318 13:31:35.484657    7748 start.go:340] cluster config:
{Name:functional-229000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-229000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0318 13:31:35.489104    7748 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0318 13:31:35.497309    7748 out.go:177] * Starting "functional-229000" primary control-plane node in "functional-229000" cluster
I0318 13:31:35.501257    7748 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0318 13:31:35.501268    7748 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
I0318 13:31:35.501276    7748 cache.go:56] Caching tarball of preloaded images
I0318 13:31:35.501324    7748 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0318 13:31:35.501328    7748 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
I0318 13:31:35.501380    7748 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/functional-229000/config.json ...
I0318 13:31:35.501808    7748 start.go:360] acquireMachinesLock for functional-229000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0318 13:31:35.501843    7748 start.go:364] duration metric: took 30.792µs to acquireMachinesLock for "functional-229000"
I0318 13:31:35.501851    7748 start.go:96] Skipping create...Using existing machine configuration
I0318 13:31:35.501854    7748 fix.go:54] fixHost starting: 
I0318 13:31:35.501967    7748 fix.go:112] recreateIfNeeded on functional-229000: state=Stopped err=<nil>
W0318 13:31:35.501977    7748 fix.go:138] unexpected machine state, will restart: <nil>
I0318 13:31:35.510359    7748 out.go:177] * Restarting existing qemu2 VM for "functional-229000" ...
I0318 13:31:35.514239    7748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:28:e9:a1:8c:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/disk.qcow2
I0318 13:31:35.516247    7748 main.go:141] libmachine: STDOUT: 
I0318 13:31:35.516264    7748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0318 13:31:35.516290    7748 fix.go:56] duration metric: took 14.434333ms for fixHost
I0318 13:31:35.516293    7748 start.go:83] releasing machines lock for "functional-229000", held for 14.44725ms
W0318 13:31:35.516299    7748 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0318 13:31:35.516322    7748 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0318 13:31:35.516326    7748 start.go:728] Will try again in 5 seconds ...
I0318 13:31:40.518454    7748 start.go:360] acquireMachinesLock for functional-229000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0318 13:31:40.518819    7748 start.go:364] duration metric: took 294.083µs to acquireMachinesLock for "functional-229000"
I0318 13:31:40.518979    7748 start.go:96] Skipping create...Using existing machine configuration
I0318 13:31:40.518992    7748 fix.go:54] fixHost starting: 
I0318 13:31:40.519662    7748 fix.go:112] recreateIfNeeded on functional-229000: state=Stopped err=<nil>
W0318 13:31:40.519683    7748 fix.go:138] unexpected machine state, will restart: <nil>
I0318 13:31:40.527202    7748 out.go:177] * Restarting existing qemu2 VM for "functional-229000" ...
I0318 13:31:40.530528    7748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:28:e9:a1:8c:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/disk.qcow2
I0318 13:31:40.540291    7748 main.go:141] libmachine: STDOUT: 
I0318 13:31:40.540339    7748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0318 13:31:40.540409    7748 fix.go:56] duration metric: took 21.418791ms for fixHost
I0318 13:31:40.540424    7748 start.go:83] releasing machines lock for "functional-229000", held for 21.563709ms
W0318 13:31:40.540611    7748 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-229000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0318 13:31:40.547196    7748 out.go:177] 
W0318 13:31:40.551212    7748 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0318 13:31:40.551238    7748 out.go:239] * 
W0318 13:31:40.553597    7748 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0318 13:31:40.561975    7748 out.go:177] 
* The control-plane node functional-229000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-229000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
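Both restart attempts above failed identically: socket_vmnet_client could not reach /var/run/socket_vmnet, so the qemu2 guest never came up and every later command ran against a stopped host. A sketch of the usual recovery, assuming socket_vmnet was installed through Homebrew as described in minikube's qemu2 driver documentation (the daemon must run as root to create the socket):

$ ls -l /var/run/socket_vmnet                             # is the socket present?
$ HOMEBREW=$(which brew) && sudo ${HOMEBREW} services start socket_vmnet
$ out/minikube-darwin-arm64 delete -p functional-229000   # as the log itself suggests
$ out/minikube-darwin-arm64 start -p functional-229000 --driver=qemu2 --network=socket_vmnet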
TestFunctional/serial/LogsFileCmd (0.08s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd3278538910/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
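The assertion at functional_test.go:1224 only checks that the captured file contains the word "Linux"; because the guest never booted (see the start failures above), the dump below holds host-side output only. A hand reproduction of the check, with an illustrative output path rather than the test's temp directory:

$ out/minikube-darwin-arm64 -p functional-229000 logs --file /tmp/logs.txt
$ grep -c Linux /tmp/logs.txt    # the test passes only if the word occurs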
***
==> Last Start <==
Log file created at: 2024/03/18 13:31:35
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0318 13:31:35.416366    7748 out.go:291] Setting OutFile to fd 1 ...
I0318 13:31:35.416483    7748 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:31:35.416485    7748 out.go:304] Setting ErrFile to fd 2...
I0318 13:31:35.416486    7748 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:31:35.416609    7748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
I0318 13:31:35.417611    7748 out.go:298] Setting JSON to false
I0318 13:31:35.433531    7748 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5467,"bootTime":1710788428,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0318 13:31:35.433611    7748 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0318 13:31:35.438401    7748 out.go:177] * [functional-229000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
I0318 13:31:35.448297    7748 out.go:177]   - MINIKUBE_LOCATION=18421
I0318 13:31:35.452365    7748 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
I0318 13:31:35.448331    7748 notify.go:220] Checking for updates...
I0318 13:31:35.459315    7748 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0318 13:31:35.462343    7748 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0318 13:31:35.465338    7748 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
I0318 13:31:35.468334    7748 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0318 13:31:35.471690    7748 config.go:182] Loaded profile config "functional-229000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 13:31:35.471750    7748 driver.go:392] Setting default libvirt URI to qemu:///system
I0318 13:31:35.475259    7748 out.go:177] * Using the qemu2 driver based on existing profile
I0318 13:31:35.482297    7748 start.go:297] selected driver: qemu2
I0318 13:31:35.482300    7748 start.go:901] validating driver "qemu2" against &{Name:functional-229000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-229000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0318 13:31:35.482346    7748 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0318 13:31:35.484570    7748 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0318 13:31:35.484603    7748 cni.go:84] Creating CNI manager for ""
I0318 13:31:35.484612    7748 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0318 13:31:35.484657    7748 start.go:340] cluster config:
{Name:functional-229000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-229000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0318 13:31:35.489104    7748 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0318 13:31:35.497309    7748 out.go:177] * Starting "functional-229000" primary control-plane node in "functional-229000" cluster
I0318 13:31:35.501257    7748 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0318 13:31:35.501268    7748 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
I0318 13:31:35.501276    7748 cache.go:56] Caching tarball of preloaded images
I0318 13:31:35.501324    7748 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0318 13:31:35.501328    7748 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
I0318 13:31:35.501380    7748 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/functional-229000/config.json ...
I0318 13:31:35.501808    7748 start.go:360] acquireMachinesLock for functional-229000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0318 13:31:35.501843    7748 start.go:364] duration metric: took 30.792µs to acquireMachinesLock for "functional-229000"
I0318 13:31:35.501851    7748 start.go:96] Skipping create...Using existing machine configuration
I0318 13:31:35.501854    7748 fix.go:54] fixHost starting: 
I0318 13:31:35.501967    7748 fix.go:112] recreateIfNeeded on functional-229000: state=Stopped err=<nil>
W0318 13:31:35.501977    7748 fix.go:138] unexpected machine state, will restart: <nil>
I0318 13:31:35.510359    7748 out.go:177] * Restarting existing qemu2 VM for "functional-229000" ...
I0318 13:31:35.514239    7748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:28:e9:a1:8c:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/disk.qcow2
I0318 13:31:35.516247    7748 main.go:141] libmachine: STDOUT: 
I0318 13:31:35.516264    7748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0318 13:31:35.516290    7748 fix.go:56] duration metric: took 14.434333ms for fixHost
I0318 13:31:35.516293    7748 start.go:83] releasing machines lock for "functional-229000", held for 14.44725ms
W0318 13:31:35.516299    7748 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0318 13:31:35.516322    7748 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0318 13:31:35.516326    7748 start.go:728] Will try again in 5 seconds ...
I0318 13:31:40.518454    7748 start.go:360] acquireMachinesLock for functional-229000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0318 13:31:40.518819    7748 start.go:364] duration metric: took 294.083µs to acquireMachinesLock for "functional-229000"
I0318 13:31:40.518979    7748 start.go:96] Skipping create...Using existing machine configuration
I0318 13:31:40.518992    7748 fix.go:54] fixHost starting: 
I0318 13:31:40.519662    7748 fix.go:112] recreateIfNeeded on functional-229000: state=Stopped err=<nil>
W0318 13:31:40.519683    7748 fix.go:138] unexpected machine state, will restart: <nil>
I0318 13:31:40.527202    7748 out.go:177] * Restarting existing qemu2 VM for "functional-229000" ...
I0318 13:31:40.530528    7748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:28:e9:a1:8c:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/functional-229000/disk.qcow2
I0318 13:31:40.540291    7748 main.go:141] libmachine: STDOUT: 
I0318 13:31:40.540339    7748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0318 13:31:40.540409    7748 fix.go:56] duration metric: took 21.418791ms for fixHost
I0318 13:31:40.540424    7748 start.go:83] releasing machines lock for "functional-229000", held for 21.563709ms
W0318 13:31:40.540611    7748 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-229000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0318 13:31:40.547196    7748 out.go:177] 
W0318 13:31:40.551212    7748 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0318 13:31:40.551238    7748 out.go:239] * 
W0318 13:31:40.553597    7748 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0318 13:31:40.561975    7748 out.go:177] 
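
Both start attempts above fail at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the qemu2 VM never boots, and every later test that expects a running functional-229000 cluster fails with a stopped host or a missing kubeconfig context. A minimal sketch of the failing connectivity check, written independently of minikube's own code, is:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Probe the same unix socket that socket_vmnet_client connects to.
	// A "connection refused" here reproduces the STDERR lines logged above
	// and points at the socket_vmnet daemon on the host, not at minikube.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}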

--- FAIL: TestFunctional/serial/LogsFileCmd (0.08s)

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-229000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-229000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.51375ms)

** stderr ** 
	error: context "functional-229000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-229000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-229000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-229000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-229000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-229000 --alsologtostderr -v=1] stderr:
I0318 13:32:34.221951    8119 out.go:291] Setting OutFile to fd 1 ...
I0318 13:32:34.222349    8119 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:32:34.222354    8119 out.go:304] Setting ErrFile to fd 2...
I0318 13:32:34.222356    8119 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:32:34.222572    8119 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
I0318 13:32:34.222762    8119 mustload.go:65] Loading cluster: functional-229000
I0318 13:32:34.222944    8119 config.go:182] Loaded profile config "functional-229000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 13:32:34.227454    8119 out.go:177] * The control-plane node functional-229000 host is not running: state=Stopped
I0318 13:32:34.231443    8119 out.go:177]   To start a cluster, run: "minikube start -p functional-229000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000: exit status 7 (44.584542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-229000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.13s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 status: exit status 7 (31.746541ms)

-- stdout --
	functional-229000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-229000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (32.111ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-229000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 status -o json: exit status 7 (31.7085ms)

-- stdout --
	{"Name":"functional-229000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-229000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000: exit status 7 (31.761458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-229000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.13s)
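
Of the three output modes exercised above, the -o json form is the easiest to consume programmatically. A hedged sketch of decoding it, with the struct fields taken directly from the stdout block above, is:

package main

import (
	"encoding/json"
	"fmt"
)

// Status mirrors the fields visible in the -o json stdout above.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// Raw JSON copied from the stdout block above.
	raw := []byte(`{"Name":"functional-229000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	var s Status
	if err := json.Unmarshal(raw, &s); err != nil {
		panic(err)
	}
	fmt.Printf("%s: host=%s apiserver=%s\n", s.Name, s.Host, s.APIServer)
}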

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-229000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-229000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.123333ms)

** stderr ** 
	error: context "functional-229000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-229000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-229000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-229000 describe po hello-node-connect: exit status 1 (26.871375ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-229000

** /stderr **
functional_test.go:1600: "kubectl --context functional-229000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-229000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-229000 logs -l app=hello-node-connect: exit status 1 (27.119875ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-229000

** /stderr **
functional_test.go:1606: "kubectl --context functional-229000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-229000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-229000 describe svc hello-node-connect: exit status 1 (26.509667ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-229000

** /stderr **
functional_test.go:1612: "kubectl --context functional-229000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000: exit status 7 (32.064833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-229000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-229000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000: exit status 7 (32.635584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-229000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "echo hello": exit status 83 (42.177333ms)

-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-229000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-229000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-229000\"\n"*. args "out/minikube-darwin-arm64 -p functional-229000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "cat /etc/hostname": exit status 83 (43.824583ms)

-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-229000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-229000"- but got *"* The control-plane node functional-229000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-229000\"\n"*. args "out/minikube-darwin-arm64 -p functional-229000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000: exit status 7 (32.397834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-229000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.29s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (57.221334ms)

-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-229000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh -n functional-229000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh -n functional-229000 "sudo cat /home/docker/cp-test.txt": exit status 83 (45.257334ms)

-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-229000 ssh -n functional-229000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-229000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-229000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 cp functional-229000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3210456581/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 cp functional-229000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3210456581/001/cp-test.txt: exit status 83 (45.31475ms)

-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-229000 cp functional-229000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3210456581/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh -n functional-229000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh -n functional-229000 "sudo cat /home/docker/cp-test.txt": exit status 83 (43.929375ms)

-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-229000 ssh -n functional-229000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3210456581/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-229000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-229000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (52.564875ms)

-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-229000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh -n functional-229000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh -n functional-229000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (41.851375ms)

-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-229000 ssh -n functional-229000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-229000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-229000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.29s)
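
The (-want +got) blocks above are diffs of the expected file content against the command's actual output. A small sketch that reproduces this kind of diff, assuming the github.com/google/go-cmp module whose output format the helpers print, is:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// Expected content of testdata/cp-test.txt versus what the stopped
	// cluster actually returned, per the helpers_test.go output above.
	want := "Test file for checking file cp process"
	got := "* The control-plane node functional-229000 host is not running: state=Stopped\n"
	// cmp.Diff returns "" when equal, otherwise a -want +got diff in the
	// same strings.Join({...}, "") shape printed above.
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("content mismatch (-want +got):\n%s", diff)
	}
}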

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7236/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "sudo cat /etc/test/nested/copy/7236/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "sudo cat /etc/test/nested/copy/7236/hosts": exit status 83 (42.837834ms)

-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-229000 ssh "sudo cat /etc/test/nested/copy/7236/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-229000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-229000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-229000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-229000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000: exit status 7 (32.128375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-229000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7236.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "sudo cat /etc/ssl/certs/7236.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "sudo cat /etc/ssl/certs/7236.pem": exit status 83 (42.929667ms)

-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/7236.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-229000 ssh \"sudo cat /etc/ssl/certs/7236.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/7236.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-229000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-229000"
	"""
)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7236.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "sudo cat /usr/share/ca-certificates/7236.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "sudo cat /usr/share/ca-certificates/7236.pem": exit status 83 (46.791375ms)

-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/7236.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-229000 ssh \"sudo cat /usr/share/ca-certificates/7236.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/7236.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-229000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-229000"
	"""
)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (40.890167ms)

-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-229000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-229000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-229000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/72362.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "sudo cat /etc/ssl/certs/72362.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "sudo cat /etc/ssl/certs/72362.pem": exit status 83 (40.351417ms)

-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/72362.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-229000 ssh \"sudo cat /etc/ssl/certs/72362.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/72362.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-229000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-229000"
	"""
)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/72362.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "sudo cat /usr/share/ca-certificates/72362.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "sudo cat /usr/share/ca-certificates/72362.pem": exit status 83 (41.610416ms)

-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/72362.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-229000 ssh \"sudo cat /usr/share/ca-certificates/72362.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/72362.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-229000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-229000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (46.573375ms)

-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-229000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-229000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-229000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000: exit status 7 (31.642167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-229000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.29s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-229000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-229000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (27.063167ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-229000

                                                
                                                
** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-229000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-229000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-229000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-229000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-229000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-229000

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-229000 -n functional-229000: exit status 7 (31.46175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-229000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
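The --template argument above uses Go text/template syntax: it ranges over the first node's label map and prints each key. A minimal standalone sketch of that range expression, using made-up label values rather than anything from this run:

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Hypothetical stand-in for (index .items 0).metadata.labels on a node;
		// these entries are illustrative only, not from the failed run above.
		labels := map[string]string{
			"kubernetes.io/hostname": "functional-229000",
			"minikube.k8s.io/name":   "functional-229000",
		}
		// Same template body the test passes to kubectl --output=go-template.
		t := template.Must(template.New("keys").Parse("{{range $k, $v := .}}{{$k}} {{end}}"))
		t.Execute(os.Stdout, labels) // prints: kubernetes.io/hostname minikube.k8s.io/name
	}

With a live cluster, kubectl feeds the node's real metadata.labels map into the same template; here the context does not exist, so the template never runs.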

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "sudo systemctl is-active crio": exit status 83 (40.45825ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

                                                
                                                
-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

                                                
                                                
-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-229000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-229000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 version -o=json --components: exit status 83 (44.081417ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

                                                
                                                
-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-229000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-229000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-229000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-229000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-229000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-229000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-229000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-229000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-229000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-229000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-229000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-229000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-229000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-229000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-229000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-229000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-229000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-229000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-229000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-229000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-229000 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-229000 image ls --format short --alsologtostderr:
I0318 13:32:34.639196    8134 out.go:291] Setting OutFile to fd 1 ...
I0318 13:32:34.639349    8134 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:32:34.639352    8134 out.go:304] Setting ErrFile to fd 2...
I0318 13:32:34.639354    8134 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:32:34.639485    8134 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
I0318 13:32:34.639888    8134 config.go:182] Loaded profile config "functional-229000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 13:32:34.639943    8134 config.go:182] Loaded profile config "functional-229000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-229000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-229000 image ls --format table --alsologtostderr:
I0318 13:32:34.869824    8146 out.go:291] Setting OutFile to fd 1 ...
I0318 13:32:34.869969    8146 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:32:34.869977    8146 out.go:304] Setting ErrFile to fd 2...
I0318 13:32:34.869980    8146 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:32:34.870112    8146 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
I0318 13:32:34.870552    8146 config.go:182] Loaded profile config "functional-229000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 13:32:34.870615    8146 config.go:182] Loaded profile config "functional-229000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-229000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-229000 image ls --format json --alsologtostderr:
I0318 13:32:34.833277    8144 out.go:291] Setting OutFile to fd 1 ...
I0318 13:32:34.833410    8144 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:32:34.833413    8144 out.go:304] Setting ErrFile to fd 2...
I0318 13:32:34.833415    8144 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:32:34.833544    8144 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
I0318 13:32:34.833974    8144 config.go:182] Loaded profile config "functional-229000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 13:32:34.834046    8144 config.go:182] Loaded profile config "functional-229000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-229000 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-229000 image ls --format yaml --alsologtostderr:
I0318 13:32:34.796750    8142 out.go:291] Setting OutFile to fd 1 ...
I0318 13:32:34.796915    8142 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:32:34.796918    8142 out.go:304] Setting ErrFile to fd 2...
I0318 13:32:34.796920    8142 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:32:34.797072    8142 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
I0318 13:32:34.797471    8142 config.go:182] Loaded profile config "functional-229000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 13:32:34.797529    8142 config.go:182] Loaded profile config "functional-229000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh pgrep buildkitd: exit status 83 (43.781334ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

                                                
                                                
-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 image build -t localhost/my-image:functional-229000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-229000 image build -t localhost/my-image:functional-229000 testdata/build --alsologtostderr:
I0318 13:32:34.721015    8138 out.go:291] Setting OutFile to fd 1 ...
I0318 13:32:34.721455    8138 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:32:34.721458    8138 out.go:304] Setting ErrFile to fd 2...
I0318 13:32:34.721461    8138 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:32:34.721606    8138 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
I0318 13:32:34.722004    8138 config.go:182] Loaded profile config "functional-229000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 13:32:34.722437    8138 config.go:182] Loaded profile config "functional-229000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 13:32:34.722673    8138 build_images.go:133] succeeded building to: 
I0318 13:32:34.722676    8138 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 image ls
functional_test.go:442: expected "localhost/my-image:functional-229000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-229000 docker-env) && out/minikube-darwin-arm64 status -p functional-229000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-229000 docker-env) && out/minikube-darwin-arm64 status -p functional-229000": exit status 1 (49.963125ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 update-context --alsologtostderr -v=2: exit status 83 (45.699708ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:32:34.506182    8128 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:32:34.507136    8128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:32:34.507140    8128 out.go:304] Setting ErrFile to fd 2...
	I0318 13:32:34.507143    8128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:32:34.507309    8128 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:32:34.507555    8128 mustload.go:65] Loading cluster: functional-229000
	I0318 13:32:34.507751    8128 config.go:182] Loaded profile config "functional-229000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:32:34.512832    8128 out.go:177] * The control-plane node functional-229000 host is not running: state=Stopped
	I0318 13:32:34.516490    8128 out.go:177]   To start a cluster, run: "minikube start -p functional-229000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-229000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-229000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-229000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 update-context --alsologtostderr -v=2: exit status 83 (42.50075ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:32:34.595737    8132 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:32:34.595954    8132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:32:34.595960    8132 out.go:304] Setting ErrFile to fd 2...
	I0318 13:32:34.595962    8132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:32:34.596105    8132 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:32:34.596346    8132 mustload.go:65] Loading cluster: functional-229000
	I0318 13:32:34.596524    8132 config.go:182] Loaded profile config "functional-229000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:32:34.599771    8132 out.go:177] * The control-plane node functional-229000 host is not running: state=Stopped
	I0318 13:32:34.603542    8132 out.go:177]   To start a cluster, run: "minikube start -p functional-229000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-229000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-229000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-229000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 update-context --alsologtostderr -v=2: exit status 83 (43.485125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:32:34.552210    8130 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:32:34.552352    8130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:32:34.552355    8130 out.go:304] Setting ErrFile to fd 2...
	I0318 13:32:34.552358    8130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:32:34.552473    8130 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:32:34.552688    8130 mustload.go:65] Loading cluster: functional-229000
	I0318 13:32:34.552876    8130 config.go:182] Loaded profile config "functional-229000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:32:34.556667    8130 out.go:177] * The control-plane node functional-229000 host is not running: state=Stopped
	I0318 13:32:34.560609    8130 out.go:177]   To start a cluster, run: "minikube start -p functional-229000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-229000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-229000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-229000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-229000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-229000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.528792ms)

                                                
                                                
** stderr ** 
	error: context "functional-229000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-229000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 service list: exit status 83 (45.61025ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

                                                
                                                
-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-229000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-229000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-229000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 service list -o json: exit status 83 (44.779833ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

                                                
                                                
-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-229000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 service --namespace=default --https --url hello-node: exit status 83 (45.663208ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

                                                
                                                
-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-229000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 service hello-node --url --format={{.IP}}: exit status 83 (44.871083ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

                                                
                                                
-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-229000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-229000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-229000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 service hello-node --url: exit status 83 (44.627667ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"

                                                
                                                
-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-229000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-229000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-229000"
functional_test.go:1565: failed to parse "* The control-plane node functional-229000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-229000\"": parse "* The control-plane node functional-229000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-229000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
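The parse error above is Go's net/url rejecting the control character (the newline) embedded in minikube's two-line advisory, which the test tried to interpret as a service URL. A minimal standalone sketch, outside the test suite, reproducing the same failure:

	package main

	import (
		"fmt"
		"net/url"
	)

	func main() {
		// The test fed minikube's advisory text to url.Parse; the embedded
		// newline is an ASCII control character, which net/url rejects.
		out := "* The control-plane node functional-229000 host is not running: state=Stopped\n" +
			"  To start a cluster, run: \"minikube start -p functional-229000\""
		_, err := url.Parse(out)
		fmt.Println(err)
		// parse "* The control-plane node ...": net/url: invalid control character in URL
	}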

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-229000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-229000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0318 13:31:43.465954    7873 out.go:291] Setting OutFile to fd 1 ...
I0318 13:31:43.466102    7873 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:31:43.466106    7873 out.go:304] Setting ErrFile to fd 2...
I0318 13:31:43.466109    7873 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:31:43.466232    7873 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
I0318 13:31:43.466468    7873 mustload.go:65] Loading cluster: functional-229000
I0318 13:31:43.466666    7873 config.go:182] Loaded profile config "functional-229000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 13:31:43.471554    7873 out.go:177] * The control-plane node functional-229000 host is not running: state=Stopped
I0318 13:31:43.479509    7873 out.go:177]   To start a cluster, run: "minikube start -p functional-229000"

                                                
                                                
stdout: * The control-plane node functional-229000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-229000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-229000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7874: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-229000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-229000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-229000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-229000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-229000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-229000": client config: context "functional-229000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (116.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-229000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-229000 get svc nginx-svc: exit status 1 (63.440167ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-229000

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-229000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (116.50s)
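The "no Host in request URL" failure above is net/http refusing a URL that has a scheme but an empty host: the tunnel never reported an ingress IP, so the test's target collapsed to the literal string "http:". A minimal standalone sketch of the same error:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// With no tunnel ingress IP, "http://" + "" collapses to "http:":
		// a URL with a scheme but no host, which the HTTP client refuses.
		_, err := http.Get("http:")
		fmt.Println(err) // Get "http:": http: no Host in request URL
	}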

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 image load --daemon gcr.io/google-containers/addon-resizer:functional-229000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-229000 image load --daemon gcr.io/google-containers/addon-resizer:functional-229000 --alsologtostderr: (1.276340833s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-229000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 image load --daemon gcr.io/google-containers/addon-resizer:functional-229000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-229000 image load --daemon gcr.io/google-containers/addon-resizer:functional-229000 --alsologtostderr: (1.297785375s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-229000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.3847635s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-229000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 image load --daemon gcr.io/google-containers/addon-resizer:functional-229000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-229000 image load --daemon gcr.io/google-containers/addon-resizer:functional-229000 --alsologtostderr: (1.161175625s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-229000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 image save gcr.io/google-containers/addon-resizer:functional-229000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-229000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.025405583s)

                                                
                                                
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

                                                
                                                
-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

                                                
                                                
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

                                                
                                                
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

                                                
                                                
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

                                                
                                                
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

                                                
                                                
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

                                                
                                                
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

                                                
                                                
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

                                                
                                                
DNS configuration (for scoped queries)

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 14 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (36.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (36.64s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (9.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-693000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-693000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.8273255s)

                                                
                                                
-- stdout --
	* [ha-693000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-693000" primary control-plane node in "ha-693000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-693000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:34:42.295915    8271 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:34:42.296045    8271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:34:42.296049    8271 out.go:304] Setting ErrFile to fd 2...
	I0318 13:34:42.296051    8271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:34:42.296174    8271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:34:42.297174    8271 out.go:298] Setting JSON to false
	I0318 13:34:42.313196    8271 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5654,"bootTime":1710788428,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:34:42.313293    8271 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:34:42.319595    8271 out.go:177] * [ha-693000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:34:42.327554    8271 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:34:42.327587    8271 notify.go:220] Checking for updates...
	I0318 13:34:42.329593    8271 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:34:42.332485    8271 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:34:42.335546    8271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:34:42.338556    8271 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:34:42.341572    8271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:34:42.344666    8271 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:34:42.348550    8271 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 13:34:42.355513    8271 start.go:297] selected driver: qemu2
	I0318 13:34:42.355519    8271 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:34:42.355524    8271 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:34:42.357753    8271 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:34:42.360497    8271 out.go:177] * Automatically selected the socket_vmnet network
	I0318 13:34:42.363503    8271 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:34:42.363538    8271 cni.go:84] Creating CNI manager for ""
	I0318 13:34:42.363543    8271 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0318 13:34:42.363553    8271 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 13:34:42.363578    8271 start.go:340] cluster config:
	{Name:ha-693000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:34:42.367970    8271 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:34:42.375352    8271 out.go:177] * Starting "ha-693000" primary control-plane node in "ha-693000" cluster
	I0318 13:34:42.379496    8271 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:34:42.379512    8271 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:34:42.379526    8271 cache.go:56] Caching tarball of preloaded images
	I0318 13:34:42.379590    8271 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:34:42.379595    8271 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:34:42.379810    8271 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/ha-693000/config.json ...
	I0318 13:34:42.379822    8271 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/ha-693000/config.json: {Name:mkca59b50c66fa560931380bbc3139e1c74c820c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:34:42.380030    8271 start.go:360] acquireMachinesLock for ha-693000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:34:42.380059    8271 start.go:364] duration metric: took 23.75µs to acquireMachinesLock for "ha-693000"
	I0318 13:34:42.380075    8271 start.go:93] Provisioning new machine with config: &{Name:ha-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:34:42.380103    8271 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:34:42.387475    8271 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:34:42.402156    8271 start.go:159] libmachine.API.Create for "ha-693000" (driver="qemu2")
	I0318 13:34:42.402179    8271 client.go:168] LocalClient.Create starting
	I0318 13:34:42.402239    8271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:34:42.402271    8271 main.go:141] libmachine: Decoding PEM data...
	I0318 13:34:42.402281    8271 main.go:141] libmachine: Parsing certificate...
	I0318 13:34:42.402334    8271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:34:42.402355    8271 main.go:141] libmachine: Decoding PEM data...
	I0318 13:34:42.402364    8271 main.go:141] libmachine: Parsing certificate...
	I0318 13:34:42.402741    8271 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:34:42.546455    8271 main.go:141] libmachine: Creating SSH key...
	I0318 13:34:42.653472    8271 main.go:141] libmachine: Creating Disk image...
	I0318 13:34:42.653480    8271 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:34:42.653661    8271 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/disk.qcow2
	I0318 13:34:42.666001    8271 main.go:141] libmachine: STDOUT: 
	I0318 13:34:42.666025    8271 main.go:141] libmachine: STDERR: 
	I0318 13:34:42.666074    8271 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/disk.qcow2 +20000M
	I0318 13:34:42.676641    8271 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:34:42.676656    8271 main.go:141] libmachine: STDERR: 
	I0318 13:34:42.676676    8271 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/disk.qcow2
	I0318 13:34:42.676686    8271 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:34:42.676713    8271 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:3d:8e:d7:ed:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/disk.qcow2
	I0318 13:34:42.678490    8271 main.go:141] libmachine: STDOUT: 
	I0318 13:34:42.678515    8271 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:34:42.678534    8271 client.go:171] duration metric: took 276.340875ms to LocalClient.Create
	I0318 13:34:44.680849    8271 start.go:128] duration metric: took 2.300631959s to createHost
	I0318 13:34:44.680983    8271 start.go:83] releasing machines lock for "ha-693000", held for 2.300831334s
	W0318 13:34:44.681043    8271 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:34:44.697279    8271 out.go:177] * Deleting "ha-693000" in qemu2 ...
	W0318 13:34:44.724888    8271 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:34:44.724967    8271 start.go:728] Will try again in 5 seconds ...
	I0318 13:34:49.725571    8271 start.go:360] acquireMachinesLock for ha-693000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:34:49.725972    8271 start.go:364] duration metric: took 311.875µs to acquireMachinesLock for "ha-693000"
	I0318 13:34:49.726091    8271 start.go:93] Provisioning new machine with config: &{Name:ha-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.28.4 ClusterName:ha-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:34:49.726366    8271 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:34:49.743235    8271 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:34:49.794536    8271 start.go:159] libmachine.API.Create for "ha-693000" (driver="qemu2")
	I0318 13:34:49.794587    8271 client.go:168] LocalClient.Create starting
	I0318 13:34:49.794719    8271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:34:49.794787    8271 main.go:141] libmachine: Decoding PEM data...
	I0318 13:34:49.794805    8271 main.go:141] libmachine: Parsing certificate...
	I0318 13:34:49.794883    8271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:34:49.794932    8271 main.go:141] libmachine: Decoding PEM data...
	I0318 13:34:49.794949    8271 main.go:141] libmachine: Parsing certificate...
	I0318 13:34:49.795598    8271 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:34:49.953096    8271 main.go:141] libmachine: Creating SSH key...
	I0318 13:34:50.015956    8271 main.go:141] libmachine: Creating Disk image...
	I0318 13:34:50.015961    8271 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:34:50.016120    8271 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/disk.qcow2
	I0318 13:34:50.028334    8271 main.go:141] libmachine: STDOUT: 
	I0318 13:34:50.028358    8271 main.go:141] libmachine: STDERR: 
	I0318 13:34:50.028423    8271 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/disk.qcow2 +20000M
	I0318 13:34:50.039066    8271 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:34:50.039082    8271 main.go:141] libmachine: STDERR: 
	I0318 13:34:50.039093    8271 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/disk.qcow2
	I0318 13:34:50.039101    8271 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:34:50.039127    8271 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:0e:fb:ad:ef:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/disk.qcow2
	I0318 13:34:50.040840    8271 main.go:141] libmachine: STDOUT: 
	I0318 13:34:50.040856    8271 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:34:50.040869    8271 client.go:171] duration metric: took 246.268834ms to LocalClient.Create
	I0318 13:34:52.043082    8271 start.go:128] duration metric: took 2.316640417s to createHost
	I0318 13:34:52.043203    8271 start.go:83] releasing machines lock for "ha-693000", held for 2.317159209s
	W0318 13:34:52.043515    8271 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:34:52.059065    8271 out.go:177] 
	W0318 13:34:52.064311    8271 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:34:52.064339    8271 out.go:239] * 
	* 
	W0318 13:34:52.066759    8271 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:34:52.079232    8271 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-693000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000: exit status 7 (69.253917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.90s)
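
Every qemu2 start in this run dies the same way: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so no VM is ever launched and the post-mortems that follow all report a "Stopped" host. That points at the socket_vmnet daemon on the build agent rather than at minikube itself. A minimal check, assuming the /opt/socket_vmnet install layout visible in the log (the daemon path and the --vmnet-gateway value below come from the socket_vmnet README, not from this report):

    # is anything serving the socket?
    ls -l /var/run/socket_vmnet
    sudo launchctl list | grep -i socket_vmnet    # if installed as a LaunchDaemon
    # start the daemon by hand (vmnet requires root)
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet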

TestMultiControlPlane/serial/DeployApp (116.15s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-693000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-693000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (60.763792ms)

** stderr ** 
	error: cluster "ha-693000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-693000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-693000 -- rollout status deployment/busybox: exit status 1 (57.476125ms)

** stderr ** 
	error: no server found for cluster "ha-693000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.675375ms)

** stderr ** 
	error: no server found for cluster "ha-693000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.131292ms)

** stderr ** 
	error: no server found for cluster "ha-693000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.099666ms)

** stderr ** 
	error: no server found for cluster "ha-693000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.380333ms)

** stderr ** 
	error: no server found for cluster "ha-693000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.465291ms)

** stderr ** 
	error: no server found for cluster "ha-693000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.130916ms)

** stderr ** 
	error: no server found for cluster "ha-693000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.793083ms)

** stderr ** 
	error: no server found for cluster "ha-693000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.195583ms)

** stderr ** 
	error: no server found for cluster "ha-693000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.545584ms)

** stderr ** 
	error: no server found for cluster "ha-693000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.311333ms)

** stderr ** 
	error: no server found for cluster "ha-693000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.695667ms)

** stderr ** 
	error: no server found for cluster "ha-693000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.755292ms)

** stderr ** 
	error: no server found for cluster "ha-693000"

** /stderr **
ha_test.go:165: failed to get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-693000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-693000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.854167ms)

** stderr ** 
	error: no server found for cluster "ha-693000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-693000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-693000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.473542ms)

** stderr ** 
	error: no server found for cluster "ha-693000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-693000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-693000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.78075ms)

** stderr ** 
	error: no server found for cluster "ha-693000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000: exit status 7 (32.17925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (116.15s)

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-693000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.164042ms)

** stderr ** 
	error: no server found for cluster "ha-693000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000: exit status 7 (31.170334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-693000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-693000 -v=7 --alsologtostderr: exit status 83 (43.127416ms)

-- stdout --
	* The control-plane node ha-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-693000"

-- /stdout --
** stderr ** 
	I0318 13:36:48.431311    8447 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:36:48.431911    8447 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:36:48.431914    8447 out.go:304] Setting ErrFile to fd 2...
	I0318 13:36:48.431917    8447 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:36:48.432040    8447 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:36:48.432281    8447 mustload.go:65] Loading cluster: ha-693000
	I0318 13:36:48.432454    8447 config.go:182] Loaded profile config "ha-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:36:48.437266    8447 out.go:177] * The control-plane node ha-693000 host is not running: state=Stopped
	I0318 13:36:48.441269    8447 out.go:177]   To start a cluster, run: "minikube start -p ha-693000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-693000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000: exit status 7 (31.570291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-693000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-693000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.656708ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-693000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-693000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-693000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000: exit status 7 (32.102959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
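
The "context was not found" error here is consistent with the failed start: since the cluster never came up, no ha-693000 entry was ever written to the kubeconfig. A quick way to confirm which contexts actually exist on the agent (standard kubectl, nothing minikube-specific):

    kubectl config get-contexts
    kubectl config view -o jsonpath='{.contexts[*].name}'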

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-693000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-693000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-693000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-693000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-693000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-693000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-693000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-693000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000: exit status 7 (31.75075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.11s)

TestMultiControlPlane/serial/CopyFile (0.07s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-693000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-693000 status --output json -v=7 --alsologtostderr: exit status 7 (32.312584ms)

-- stdout --
	{"Name":"ha-693000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0318 13:36:48.673335    8460 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:36:48.673518    8460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:36:48.673521    8460 out.go:304] Setting ErrFile to fd 2...
	I0318 13:36:48.673524    8460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:36:48.673665    8460 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:36:48.673781    8460 out.go:298] Setting JSON to true
	I0318 13:36:48.673793    8460 mustload.go:65] Loading cluster: ha-693000
	I0318 13:36:48.673844    8460 notify.go:220] Checking for updates...
	I0318 13:36:48.673966    8460 config.go:182] Loaded profile config "ha-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:36:48.673974    8460 status.go:255] checking status of ha-693000 ...
	I0318 13:36:48.674196    8460 status.go:330] ha-693000 host status = "Stopped" (err=<nil>)
	I0318 13:36:48.674199    8460 status.go:343] host is not running, skipping remaining checks
	I0318 13:36:48.674202    8460 status.go:257] ha-693000 status: &{Name:ha-693000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-693000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000: exit status 7 (32.803875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.07s)
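
The decode failure in this test is a shape mismatch rather than corrupt output: the harness unmarshals into []cmd.Status, while `status --output json` for this single-node profile printed a single JSON object (visible in the stdout block above). One way to confirm the shape on the agent, assuming jq is available there:

    out/minikube-darwin-arm64 -p ha-693000 status --output json | jq 'type'
    # prints "object"; the test expected "array"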

TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-693000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-693000 node stop m02 -v=7 --alsologtostderr: exit status 85 (49.944291ms)

-- stdout --
	
	

                                                
** stderr ** 
	I0318 13:36:48.738880    8464 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:36:48.739113    8464 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:36:48.739116    8464 out.go:304] Setting ErrFile to fd 2...
	I0318 13:36:48.739118    8464 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:36:48.739248    8464 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:36:48.739521    8464 mustload.go:65] Loading cluster: ha-693000
	I0318 13:36:48.739736    8464 config.go:182] Loaded profile config "ha-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:36:48.744729    8464 out.go:177] 
	W0318 13:36:48.747707    8464 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0318 13:36:48.747717    8464 out.go:239] * 
	* 
	W0318 13:36:48.749588    8464 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:36:48.753672    8464 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-693000 node stop m02 -v=7 --alsologtostderr": exit status 85
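
Note: exit status 85 together with "GUEST_NODE_RETRIEVE ... Could not find node m02" means the profile never had a second node to stop; the earlier StartCluster failure left ha-693000 with only its primary node. A quick way to confirm that from the harness workspace is minikube's standard `node list` subcommand (the binary path below just mirrors the test invocation):

    # List the nodes the profile actually has; only the primary is expected here.
    out/minikube-darwin-arm64 node list -p ha-693000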
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr: exit status 7 (32.111791ms)

                                                
                                                
-- stdout --
	ha-693000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:36:48.789115    8466 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:36:48.789264    8466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:36:48.789267    8466 out.go:304] Setting ErrFile to fd 2...
	I0318 13:36:48.789268    8466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:36:48.789389    8466 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:36:48.789515    8466 out.go:298] Setting JSON to false
	I0318 13:36:48.789526    8466 mustload.go:65] Loading cluster: ha-693000
	I0318 13:36:48.789578    8466 notify.go:220] Checking for updates...
	I0318 13:36:48.789714    8466 config.go:182] Loaded profile config "ha-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:36:48.789720    8466 status.go:255] checking status of ha-693000 ...
	I0318 13:36:48.789929    8466 status.go:330] ha-693000 host status = "Stopped" (err=<nil>)
	I0318 13:36:48.789933    8466 status.go:343] host is not running, skipping remaining checks
	I0318 13:36:48.789936    8466 status.go:257] ha-693000 status: &{Name:ha-693000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr": ha-693000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr": ha-693000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr": ha-693000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr": ha-693000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
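
Note: all four assertions above fail against the same single-node "Stopped" status, so they share one root cause rather than four. When scripting checks like these, the JSON form of `status` is easier to assert on than the text dump; a minimal sketch using the documented --output flag:

    # Machine-readable status for the profile (same data as the text dump above).
    out/minikube-darwin-arm64 -p ha-693000 status --output json
    # e.g. {"Name":"ha-693000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped",...}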

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000: exit status 7 (31.979584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-693000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-693000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-693000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-693000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000: exit status 7 (31.741625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.11s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (56.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-693000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-693000 node start m02 -v=7 --alsologtostderr: exit status 85 (49.7985ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:36:48.958844    8476 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:36:48.959083    8476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:36:48.959086    8476 out.go:304] Setting ErrFile to fd 2...
	I0318 13:36:48.959088    8476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:36:48.959216    8476 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:36:48.959451    8476 mustload.go:65] Loading cluster: ha-693000
	I0318 13:36:48.959636    8476 config.go:182] Loaded profile config "ha-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:36:48.964440    8476 out.go:177] 
	W0318 13:36:48.967575    8476 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0318 13:36:48.967586    8476 out.go:239] * 
	* 
	W0318 13:36:48.969461    8476 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:36:48.973489    8476 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:422: I0318 13:36:48.958844    8476 out.go:291] Setting OutFile to fd 1 ...
I0318 13:36:48.959083    8476 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:36:48.959086    8476 out.go:304] Setting ErrFile to fd 2...
I0318 13:36:48.959088    8476 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:36:48.959216    8476 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
I0318 13:36:48.959451    8476 mustload.go:65] Loading cluster: ha-693000
I0318 13:36:48.959636    8476 config.go:182] Loaded profile config "ha-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 13:36:48.964440    8476 out.go:177] 
W0318 13:36:48.967575    8476 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0318 13:36:48.967586    8476 out.go:239] * 
* 
W0318 13:36:48.969461    8476 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0318 13:36:48.973489    8476 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-693000 node start m02 -v=7 --alsologtostderr": exit status 85
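
Note: `node start m02` fails with the same exit status 85 / GUEST_NODE_RETRIEVE, since there is no m02 node to start. The harness then re-runs `status` with growing delays (the repeated ha_test.go:428 invocations below, spanning roughly 13:36:48 to 13:37:45). A rough shell equivalent of that poll loop, with illustrative backoff values rather than the harness's actual schedule:

    # Poll status until it succeeds or the retries are exhausted (sketch only).
    for delay in 1 2 3 3 7 9 6 26; do
      out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr && break
      sleep "$delay"
    done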
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr: exit status 7 (32.129ms)

                                                
                                                
-- stdout --
	ha-693000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:36:49.008705    8478 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:36:49.008865    8478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:36:49.008868    8478 out.go:304] Setting ErrFile to fd 2...
	I0318 13:36:49.008871    8478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:36:49.009018    8478 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:36:49.009147    8478 out.go:298] Setting JSON to false
	I0318 13:36:49.009158    8478 mustload.go:65] Loading cluster: ha-693000
	I0318 13:36:49.009209    8478 notify.go:220] Checking for updates...
	I0318 13:36:49.009443    8478 config.go:182] Loaded profile config "ha-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:36:49.009451    8478 status.go:255] checking status of ha-693000 ...
	I0318 13:36:49.009640    8478 status.go:330] ha-693000 host status = "Stopped" (err=<nil>)
	I0318 13:36:49.009644    8478 status.go:343] host is not running, skipping remaining checks
	I0318 13:36:49.009646    8478 status.go:257] ha-693000 status: &{Name:ha-693000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr: exit status 7 (75.886541ms)

                                                
                                                
-- stdout --
	ha-693000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:36:50.238448    8480 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:36:50.238650    8480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:36:50.238655    8480 out.go:304] Setting ErrFile to fd 2...
	I0318 13:36:50.238658    8480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:36:50.238824    8480 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:36:50.238986    8480 out.go:298] Setting JSON to false
	I0318 13:36:50.239002    8480 mustload.go:65] Loading cluster: ha-693000
	I0318 13:36:50.239051    8480 notify.go:220] Checking for updates...
	I0318 13:36:50.239247    8480 config.go:182] Loaded profile config "ha-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:36:50.239256    8480 status.go:255] checking status of ha-693000 ...
	I0318 13:36:50.239533    8480 status.go:330] ha-693000 host status = "Stopped" (err=<nil>)
	I0318 13:36:50.239538    8480 status.go:343] host is not running, skipping remaining checks
	I0318 13:36:50.239540    8480 status.go:257] ha-693000 status: &{Name:ha-693000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr: exit status 7 (74.874666ms)

                                                
                                                
-- stdout --
	ha-693000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:36:52.094889    8482 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:36:52.095078    8482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:36:52.095083    8482 out.go:304] Setting ErrFile to fd 2...
	I0318 13:36:52.095086    8482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:36:52.095274    8482 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:36:52.095449    8482 out.go:298] Setting JSON to false
	I0318 13:36:52.095464    8482 mustload.go:65] Loading cluster: ha-693000
	I0318 13:36:52.095501    8482 notify.go:220] Checking for updates...
	I0318 13:36:52.095692    8482 config.go:182] Loaded profile config "ha-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:36:52.095704    8482 status.go:255] checking status of ha-693000 ...
	I0318 13:36:52.095943    8482 status.go:330] ha-693000 host status = "Stopped" (err=<nil>)
	I0318 13:36:52.095947    8482 status.go:343] host is not running, skipping remaining checks
	I0318 13:36:52.095949    8482 status.go:257] ha-693000 status: &{Name:ha-693000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr: exit status 7 (76.770083ms)

                                                
                                                
-- stdout --
	ha-693000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:36:55.113310    8486 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:36:55.113506    8486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:36:55.113510    8486 out.go:304] Setting ErrFile to fd 2...
	I0318 13:36:55.113513    8486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:36:55.113674    8486 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:36:55.113834    8486 out.go:298] Setting JSON to false
	I0318 13:36:55.113853    8486 mustload.go:65] Loading cluster: ha-693000
	I0318 13:36:55.113881    8486 notify.go:220] Checking for updates...
	I0318 13:36:55.114122    8486 config.go:182] Loaded profile config "ha-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:36:55.114130    8486 status.go:255] checking status of ha-693000 ...
	I0318 13:36:55.114399    8486 status.go:330] ha-693000 host status = "Stopped" (err=<nil>)
	I0318 13:36:55.114404    8486 status.go:343] host is not running, skipping remaining checks
	I0318 13:36:55.114408    8486 status.go:257] ha-693000 status: &{Name:ha-693000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr: exit status 7 (76.670291ms)

                                                
                                                
-- stdout --
	ha-693000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:36:58.087267    8492 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:36:58.087449    8492 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:36:58.087453    8492 out.go:304] Setting ErrFile to fd 2...
	I0318 13:36:58.087456    8492 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:36:58.087608    8492 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:36:58.087754    8492 out.go:298] Setting JSON to false
	I0318 13:36:58.087769    8492 mustload.go:65] Loading cluster: ha-693000
	I0318 13:36:58.087802    8492 notify.go:220] Checking for updates...
	I0318 13:36:58.088027    8492 config.go:182] Loaded profile config "ha-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:36:58.088035    8492 status.go:255] checking status of ha-693000 ...
	I0318 13:36:58.088311    8492 status.go:330] ha-693000 host status = "Stopped" (err=<nil>)
	I0318 13:36:58.088316    8492 status.go:343] host is not running, skipping remaining checks
	I0318 13:36:58.088319    8492 status.go:257] ha-693000 status: &{Name:ha-693000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr: exit status 7 (78.896041ms)

                                                
                                                
-- stdout --
	ha-693000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:37:04.963739    8498 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:37:04.964038    8498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:37:04.964045    8498 out.go:304] Setting ErrFile to fd 2...
	I0318 13:37:04.964048    8498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:37:04.964224    8498 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:37:04.964373    8498 out.go:298] Setting JSON to false
	I0318 13:37:04.964396    8498 mustload.go:65] Loading cluster: ha-693000
	I0318 13:37:04.964423    8498 notify.go:220] Checking for updates...
	I0318 13:37:04.964636    8498 config.go:182] Loaded profile config "ha-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:37:04.964645    8498 status.go:255] checking status of ha-693000 ...
	I0318 13:37:04.964941    8498 status.go:330] ha-693000 host status = "Stopped" (err=<nil>)
	I0318 13:37:04.964946    8498 status.go:343] host is not running, skipping remaining checks
	I0318 13:37:04.964949    8498 status.go:257] ha-693000 status: &{Name:ha-693000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr: exit status 7 (75.843833ms)

                                                
                                                
-- stdout --
	ha-693000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:37:13.979993    8504 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:37:13.980183    8504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:37:13.980187    8504 out.go:304] Setting ErrFile to fd 2...
	I0318 13:37:13.980190    8504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:37:13.980348    8504 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:37:13.980503    8504 out.go:298] Setting JSON to false
	I0318 13:37:13.980525    8504 mustload.go:65] Loading cluster: ha-693000
	I0318 13:37:13.980557    8504 notify.go:220] Checking for updates...
	I0318 13:37:13.980757    8504 config.go:182] Loaded profile config "ha-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:37:13.980764    8504 status.go:255] checking status of ha-693000 ...
	I0318 13:37:13.981028    8504 status.go:330] ha-693000 host status = "Stopped" (err=<nil>)
	I0318 13:37:13.981032    8504 status.go:343] host is not running, skipping remaining checks
	I0318 13:37:13.981038    8504 status.go:257] ha-693000 status: &{Name:ha-693000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr: exit status 7 (75.685666ms)

                                                
                                                
-- stdout --
	ha-693000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:37:19.805841    8510 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:37:19.806018    8510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:37:19.806023    8510 out.go:304] Setting ErrFile to fd 2...
	I0318 13:37:19.806026    8510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:37:19.806198    8510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:37:19.806376    8510 out.go:298] Setting JSON to false
	I0318 13:37:19.806391    8510 mustload.go:65] Loading cluster: ha-693000
	I0318 13:37:19.806431    8510 notify.go:220] Checking for updates...
	I0318 13:37:19.806628    8510 config.go:182] Loaded profile config "ha-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:37:19.806637    8510 status.go:255] checking status of ha-693000 ...
	I0318 13:37:19.806902    8510 status.go:330] ha-693000 host status = "Stopped" (err=<nil>)
	I0318 13:37:19.806907    8510 status.go:343] host is not running, skipping remaining checks
	I0318 13:37:19.806910    8510 status.go:257] ha-693000 status: &{Name:ha-693000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr: exit status 7 (76.655334ms)

                                                
                                                
-- stdout --
	ha-693000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:37:45.414295    8527 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:37:45.414499    8527 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:37:45.414504    8527 out.go:304] Setting ErrFile to fd 2...
	I0318 13:37:45.414507    8527 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:37:45.414680    8527 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:37:45.414863    8527 out.go:298] Setting JSON to false
	I0318 13:37:45.414878    8527 mustload.go:65] Loading cluster: ha-693000
	I0318 13:37:45.414927    8527 notify.go:220] Checking for updates...
	I0318 13:37:45.415160    8527 config.go:182] Loaded profile config "ha-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:37:45.415170    8527 status.go:255] checking status of ha-693000 ...
	I0318 13:37:45.415414    8527 status.go:330] ha-693000 host status = "Stopped" (err=<nil>)
	I0318 13:37:45.415419    8527 status.go:343] host is not running, skipping remaining checks
	I0318 13:37:45.415422    8527 status.go:257] ha-693000 status: &{Name:ha-693000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr" : exit status 7
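
Note: exit status 7 from `status` is a bitmask, not a generic failure: in minikube's status command the host, cluster, and Kubernetes components each set one bit (1, 2, and 4), so 7 means all three are down, consistent with the "Stopped" fields above. That reading is based on minikube's status implementation and is worth re-verifying against the version under test:

    # Observe the exit code directly.
    out/minikube-darwin-arm64 -p ha-693000 status; echo "exit=$?"   # exit=7 here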
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000: exit status 7 (34.500458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (56.52s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-693000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-693000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-693000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-693000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-693000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-693000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-693000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-693000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000: exit status 7 (31.944875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.10s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-693000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-693000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-693000 -v=7 --alsologtostderr: (1.882811333s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-693000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-693000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.227970625s)

                                                
                                                
-- stdout --
	* [ha-693000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-693000" primary control-plane node in "ha-693000" cluster
	* Restarting existing qemu2 VM for "ha-693000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-693000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
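
Note: this restart fails before any Kubernetes work starts: qemu is launched through socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, which typically means the socket_vmnet daemon is not running on the host. Two quick host-side checks, assuming the install paths shown in the stderr below (an assumption about this runner's setup):

    # Is the socket_vmnet daemon alive, and does its socket path exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet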
** stderr ** 
	I0318 13:37:47.539594    8551 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:37:47.539770    8551 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:37:47.539774    8551 out.go:304] Setting ErrFile to fd 2...
	I0318 13:37:47.539777    8551 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:37:47.539934    8551 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:37:47.541181    8551 out.go:298] Setting JSON to false
	I0318 13:37:47.559819    8551 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5839,"bootTime":1710788428,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:37:47.559879    8551 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:37:47.565235    8551 out.go:177] * [ha-693000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:37:47.572180    8551 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:37:47.576010    8551 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:37:47.572224    8551 notify.go:220] Checking for updates...
	I0318 13:37:47.579149    8551 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:37:47.582155    8551 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:37:47.585141    8551 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:37:47.588132    8551 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:37:47.591504    8551 config.go:182] Loaded profile config "ha-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:37:47.591564    8551 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:37:47.596122    8551 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 13:37:47.603146    8551 start.go:297] selected driver: qemu2
	I0318 13:37:47.603153    8551 start.go:901] validating driver "qemu2" against &{Name:ha-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.28.4 ClusterName:ha-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:37:47.603230    8551 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:37:47.605597    8551 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:37:47.605647    8551 cni.go:84] Creating CNI manager for ""
	I0318 13:37:47.605652    8551 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 13:37:47.605709    8551 start.go:340] cluster config:
	{Name:ha-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-693000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:37:47.610177    8551 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:37:47.617151    8551 out.go:177] * Starting "ha-693000" primary control-plane node in "ha-693000" cluster
	I0318 13:37:47.621095    8551 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:37:47.621109    8551 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:37:47.621123    8551 cache.go:56] Caching tarball of preloaded images
	I0318 13:37:47.621176    8551 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:37:47.621182    8551 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:37:47.621251    8551 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/ha-693000/config.json ...
	I0318 13:37:47.621728    8551 start.go:360] acquireMachinesLock for ha-693000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:37:47.621763    8551 start.go:364] duration metric: took 29.542µs to acquireMachinesLock for "ha-693000"
	I0318 13:37:47.621772    8551 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:37:47.621776    8551 fix.go:54] fixHost starting: 
	I0318 13:37:47.621885    8551 fix.go:112] recreateIfNeeded on ha-693000: state=Stopped err=<nil>
	W0318 13:37:47.621894    8551 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:37:47.626089    8551 out.go:177] * Restarting existing qemu2 VM for "ha-693000" ...
	I0318 13:37:47.633094    8551 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:0e:fb:ad:ef:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/disk.qcow2
	I0318 13:37:47.635161    8551 main.go:141] libmachine: STDOUT: 
	I0318 13:37:47.635182    8551 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:37:47.635209    8551 fix.go:56] duration metric: took 13.431833ms for fixHost
	I0318 13:37:47.635213    8551 start.go:83] releasing machines lock for "ha-693000", held for 13.446042ms
	W0318 13:37:47.635221    8551 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:37:47.635253    8551 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:37:47.635258    8551 start.go:728] Will try again in 5 seconds ...
	I0318 13:37:52.637408    8551 start.go:360] acquireMachinesLock for ha-693000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:37:52.637931    8551 start.go:364] duration metric: took 394.75µs to acquireMachinesLock for "ha-693000"
	I0318 13:37:52.638074    8551 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:37:52.638101    8551 fix.go:54] fixHost starting: 
	I0318 13:37:52.638808    8551 fix.go:112] recreateIfNeeded on ha-693000: state=Stopped err=<nil>
	W0318 13:37:52.638834    8551 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:37:52.644293    8551 out.go:177] * Restarting existing qemu2 VM for "ha-693000" ...
	I0318 13:37:52.652343    8551 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:0e:fb:ad:ef:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/disk.qcow2
	I0318 13:37:52.662527    8551 main.go:141] libmachine: STDOUT: 
	I0318 13:37:52.662596    8551 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:37:52.662694    8551 fix.go:56] duration metric: took 24.598541ms for fixHost
	I0318 13:37:52.662715    8551 start.go:83] releasing machines lock for "ha-693000", held for 24.75975ms
	W0318 13:37:52.662916    8551 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-693000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-693000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:37:52.670245    8551 out.go:177] 
	W0318 13:37:52.674171    8551 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:37:52.674288    8551 out.go:239] * 
	* 
	W0318 13:37:52.677201    8551 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:37:52.685187    8551 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-693000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-693000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000: exit status 7 (34.54375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.25s)
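
Note: every failure in this block, and in the blocks that follow, traces to a single precondition: the qemu2 driver launches the VM through socket_vmnet_client, and the socket_vmnet daemon behind /var/run/socket_vmnet is not accepting connections ("Connection refused"). Below is a minimal standalone probe of that precondition, as a sketch only; the file name and helper are hypothetical, and the socket path is copied from the log above.

	// probe_socket_vmnet.go - hypothetical standalone check that the
	// socket_vmnet daemon is accepting connections. This is the same dial
	// the qemu2 driver needs to succeed before qemu-system-aarch64 can start.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path copied from the failure logs
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Same condition the driver reports as:
			//   Failed to connect to "/var/run/socket_vmnet": Connection refused
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, restarting the daemon on the build host (for a Homebrew-managed install, something like "sudo brew services restart socket_vmnet") should clear this whole family of failures, which are all downstream of the one refused connection.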

TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-693000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-693000 node delete m03 -v=7 --alsologtostderr: exit status 83 (43.022708ms)

-- stdout --
	* The control-plane node ha-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-693000"

-- /stdout --
** stderr ** 
	I0318 13:37:52.835674    8568 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:37:52.836065    8568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:37:52.836069    8568 out.go:304] Setting ErrFile to fd 2...
	I0318 13:37:52.836071    8568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:37:52.836266    8568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:37:52.836493    8568 mustload.go:65] Loading cluster: ha-693000
	I0318 13:37:52.836695    8568 config.go:182] Loaded profile config "ha-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:37:52.841595    8568 out.go:177] * The control-plane node ha-693000 host is not running: state=Stopped
	I0318 13:37:52.844604    8568 out.go:177]   To start a cluster, run: "minikube start -p ha-693000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-693000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr: exit status 7 (32.104292ms)

-- stdout --
	ha-693000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 13:37:52.879089    8570 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:37:52.879228    8570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:37:52.879232    8570 out.go:304] Setting ErrFile to fd 2...
	I0318 13:37:52.879234    8570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:37:52.879360    8570 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:37:52.879484    8570 out.go:298] Setting JSON to false
	I0318 13:37:52.879495    8570 mustload.go:65] Loading cluster: ha-693000
	I0318 13:37:52.879546    8570 notify.go:220] Checking for updates...
	I0318 13:37:52.879696    8570 config.go:182] Loaded profile config "ha-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:37:52.879702    8570 status.go:255] checking status of ha-693000 ...
	I0318 13:37:52.879898    8570 status.go:330] ha-693000 host status = "Stopped" (err=<nil>)
	I0318 13:37:52.879902    8570 status.go:343] host is not running, skipping remaining checks
	I0318 13:37:52.879904    8570 status.go:257] ha-693000 status: &{Name:ha-693000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000: exit status 7 (32.281459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.1s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-693000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-693000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-693000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-693000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000: exit status 7 (31.518583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.10s)
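
Note: the "Degraded" assertion above is driven by the JSON from "minikube profile list --output json", checking the Status field of the profile inside the "valid" array (visible in the quoted blob). A sketch of that check follows; the reduced struct models only the two fields read here and is not minikube's own type.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileList models only the fields the assertion inspects.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64",
			"profile", "list", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			// The test wants "Degraded"; with the VM never started it sees "Stopped".
			fmt.Printf("%s: %s\n", p.Name, p.Status)
		}
	}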

TestMultiControlPlane/serial/StopCluster (3.44s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-693000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-693000 stop -v=7 --alsologtostderr: (3.340708416s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr: exit status 7 (69.029291ms)

-- stdout --
	ha-693000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 13:37:56.425096    8600 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:37:56.425243    8600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:37:56.425248    8600 out.go:304] Setting ErrFile to fd 2...
	I0318 13:37:56.425250    8600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:37:56.425413    8600 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:37:56.425555    8600 out.go:298] Setting JSON to false
	I0318 13:37:56.425570    8600 mustload.go:65] Loading cluster: ha-693000
	I0318 13:37:56.425606    8600 notify.go:220] Checking for updates...
	I0318 13:37:56.425822    8600 config.go:182] Loaded profile config "ha-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:37:56.425832    8600 status.go:255] checking status of ha-693000 ...
	I0318 13:37:56.426088    8600 status.go:330] ha-693000 host status = "Stopped" (err=<nil>)
	I0318 13:37:56.426093    8600 status.go:343] host is not running, skipping remaining checks
	I0318 13:37:56.426096    8600 status.go:257] ha-693000 status: &{Name:ha-693000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr": ha-693000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr": ha-693000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-693000 status -v=7 --alsologtostderr": ha-693000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000: exit status 7 (34.032667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.44s)
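
Note: the three assertions above (ha_test.go:543, :549, :552) amount to counting markers in the status text: per their messages the test expects two "type: Control Plane" entries, three stopped kubelets, and two stopped apiservers, while the stopped single-node profile yields one of each. A sketch of that counting, under the assumption that it mirrors the test's logic, using the status text quoted in the -- stdout -- block:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status text captured above.
		status := "ha-693000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		fmt.Println("control planes:", strings.Count(status, "type: Control Plane"))    // expected 2, got 1
		fmt.Println("stopped kubelets:", strings.Count(status, "kubelet: Stopped"))     // expected 3, got 1
		fmt.Println("stopped apiservers:", strings.Count(status, "apiserver: Stopped")) // expected 2, got 1
	}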

TestMultiControlPlane/serial/RestartCluster (5.23s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-693000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-693000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.182371916s)

-- stdout --
	* [ha-693000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-693000" primary control-plane node in "ha-693000" cluster
	* Restarting existing qemu2 VM for "ha-693000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-693000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:37:56.491412    8604 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:37:56.491541    8604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:37:56.491544    8604 out.go:304] Setting ErrFile to fd 2...
	I0318 13:37:56.491547    8604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:37:56.491672    8604 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:37:56.492717    8604 out.go:298] Setting JSON to false
	I0318 13:37:56.509060    8604 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5848,"bootTime":1710788428,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:37:56.509125    8604 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:37:56.513159    8604 out.go:177] * [ha-693000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:37:56.520199    8604 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:37:56.520226    8604 notify.go:220] Checking for updates...
	I0318 13:37:56.528073    8604 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:37:56.531167    8604 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:37:56.534140    8604 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:37:56.542141    8604 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:37:56.545174    8604 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:37:56.548471    8604 config.go:182] Loaded profile config "ha-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:37:56.548750    8604 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:37:56.553132    8604 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 13:37:56.560138    8604 start.go:297] selected driver: qemu2
	I0318 13:37:56.560145    8604 start.go:901] validating driver "qemu2" against &{Name:ha-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:37:56.560196    8604 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:37:56.562514    8604 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:37:56.562568    8604 cni.go:84] Creating CNI manager for ""
	I0318 13:37:56.562572    8604 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 13:37:56.562615    8604 start.go:340] cluster config:
	{Name:ha-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:37:56.567161    8604 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:37:56.574010    8604 out.go:177] * Starting "ha-693000" primary control-plane node in "ha-693000" cluster
	I0318 13:37:56.578087    8604 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:37:56.578101    8604 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:37:56.578114    8604 cache.go:56] Caching tarball of preloaded images
	I0318 13:37:56.578168    8604 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:37:56.578174    8604 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:37:56.578247    8604 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/ha-693000/config.json ...
	I0318 13:37:56.578722    8604 start.go:360] acquireMachinesLock for ha-693000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:37:56.578747    8604 start.go:364] duration metric: took 19.75µs to acquireMachinesLock for "ha-693000"
	I0318 13:37:56.578756    8604 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:37:56.578761    8604 fix.go:54] fixHost starting: 
	I0318 13:37:56.578870    8604 fix.go:112] recreateIfNeeded on ha-693000: state=Stopped err=<nil>
	W0318 13:37:56.578879    8604 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:37:56.580713    8604 out.go:177] * Restarting existing qemu2 VM for "ha-693000" ...
	I0318 13:37:56.589148    8604 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:0e:fb:ad:ef:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/disk.qcow2
	I0318 13:37:56.591182    8604 main.go:141] libmachine: STDOUT: 
	I0318 13:37:56.591205    8604 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:37:56.591235    8604 fix.go:56] duration metric: took 12.474125ms for fixHost
	I0318 13:37:56.591239    8604 start.go:83] releasing machines lock for "ha-693000", held for 12.4885ms
	W0318 13:37:56.591248    8604 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:37:56.591274    8604 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:37:56.591279    8604 start.go:728] Will try again in 5 seconds ...
	I0318 13:38:01.592079    8604 start.go:360] acquireMachinesLock for ha-693000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:38:01.592382    8604 start.go:364] duration metric: took 226.667µs to acquireMachinesLock for "ha-693000"
	I0318 13:38:01.592457    8604 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:38:01.592478    8604 fix.go:54] fixHost starting: 
	I0318 13:38:01.593070    8604 fix.go:112] recreateIfNeeded on ha-693000: state=Stopped err=<nil>
	W0318 13:38:01.593093    8604 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:38:01.597450    8604 out.go:177] * Restarting existing qemu2 VM for "ha-693000" ...
	I0318 13:38:01.602373    8604 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:0e:fb:ad:ef:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000/disk.qcow2
	I0318 13:38:01.610377    8604 main.go:141] libmachine: STDOUT: 
	I0318 13:38:01.610445    8604 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:38:01.610524    8604 fix.go:56] duration metric: took 18.05075ms for fixHost
	I0318 13:38:01.610538    8604 start.go:83] releasing machines lock for "ha-693000", held for 18.137292ms
	W0318 13:38:01.610731    8604 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-693000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-693000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:38:01.619250    8604 out.go:177] 
	W0318 13:38:01.623473    8604 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:38:01.623530    8604 out.go:239] * 
	* 
	W0318 13:38:01.625834    8604 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:38:01.632368    8604 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-693000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000: exit status 7 (47.653542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.23s)
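
Note: the full launch command appears twice in the log above: libmachine runs socket_vmnet_client, which dials /var/run/socket_vmnet and then execs qemu-system-aarch64 with the vmnet connection inherited as file descriptor 3 (hence "-netdev socket,id=net0,fd=3"). An abridged, hypothetical re-run of that invocation follows, useful for reproducing the refused connection outside the test harness; the firmware, display, QMP, and pidfile flags from the log are omitted for brevity.

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		machine := "/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/ha-693000"
		cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
			"/var/run/socket_vmnet", // dialed first; this is what refuses the connection
			"qemu-system-aarch64",   // exec'd with the vmnet connection as fd 3
			"-M", "virt,highmem=off",
			"-cpu", "host", "-accel", "hvf",
			"-m", "2200", "-smp", "2", "-boot", "d",
			"-cdrom", machine+"/boot2docker.iso",
			"-device", "virtio-net-pci,netdev=net0,mac=de:0e:fb:ad:ef:3c",
			"-netdev", "socket,id=net0,fd=3",
			"-daemonize", machine+"/disk.qcow2",
		)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		// With the daemon down this fails immediately with the same
		// "Connection refused" seen throughout this report.
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}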

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-693000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-693000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-693000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-693000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000: exit status 7 (33.93075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)

TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-693000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-693000 --control-plane -v=7 --alsologtostderr: exit status 83 (49.552458ms)

-- stdout --
	* The control-plane node ha-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-693000"

-- /stdout --
** stderr ** 
	I0318 13:38:01.837663    8624 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:38:01.837835    8624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:38:01.837838    8624 out.go:304] Setting ErrFile to fd 2...
	I0318 13:38:01.837841    8624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:38:01.837982    8624 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:38:01.838259    8624 mustload.go:65] Loading cluster: ha-693000
	I0318 13:38:01.838466    8624 config.go:182] Loaded profile config "ha-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:38:01.842839    8624 out.go:177] * The control-plane node ha-693000 host is not running: state=Stopped
	I0318 13:38:01.849891    8624 out.go:177]   To start a cluster, run: "minikube start -p ha-693000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-693000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000: exit status 7 (32.667042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-693000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-693000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-693000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-693000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-693000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-693000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-693000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-693000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-693000 -n ha-693000: exit status 7 (31.071166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.10s)

TestImageBuild/serial/Setup (9.82s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-597000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-597000 --driver=qemu2 : exit status 80 (9.752560042s)

-- stdout --
	* [image-597000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-597000" primary control-plane node in "image-597000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-597000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-597000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-597000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-597000 -n image-597000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-597000 -n image-597000: exit status 7 (69.316ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-597000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.82s)

TestJSONOutput/start/Command (9.85s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-291000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-291000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.847905583s)

-- stdout --
	{"specversion":"1.0","id":"6d1e54b0-0440-4648-9426-12a880b2d69f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-291000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"83c9d2a1-0ab2-4a70-82d1-04e5657bae28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18421"}}
	{"specversion":"1.0","id":"14cc4569-4b51-47c1-897a-94a27f884195","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig"}}
	{"specversion":"1.0","id":"0941ba8b-d047-41f6-afa4-5c81098d3da4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"330c73d2-44b6-4950-8d14-138433144878","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"36ba90f4-f210-4400-8ca8-2d42fb5881df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube"}}
	{"specversion":"1.0","id":"f946e0dd-c300-4503-ae03-454e57c9f2ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f86286bc-1f72-4759-8681-60351b1171e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6a94798b-b614-42c7-8dd4-5331b0b36f2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"278d04cc-3606-441d-a71a-02ee1771e543","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-291000\" primary control-plane node in \"json-output-291000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"16eb8652-591e-4e30-9883-864204b2447b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"9a932df0-bcd6-4a8a-abfc-75c221068ff8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-291000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"3c2906ec-2ab0-4e1a-9aba-2da3c385bd58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"a49508bf-271d-4d9b-bd81-0e679aae296c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"6e96506e-5f31-40fc-8a7d-51e35ad6afc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-291000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"6d303eec-13c1-418b-9e77-b588123755d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"e19a9735-42a2-4a38-bbf7-456eea0a9658","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-291000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.85s)
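
Note: the secondary error here ("invalid character 'O' looking for beginning of value") is a parsing failure layered on top of the socket problem: socket_vmnet_client's plain-text OUTPUT:/ERROR: lines are interleaved verbatim into stdout, which the test decodes line by line as CloudEvents JSON. A self-contained sketch of that decode path, using sample lines copied from the output above (this mirrors the failure shape, not the test's actual code):

	// events_sketch.go: why the CloudEvents check trips on 'O'.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"strings"
	)

	func main() {
		stdout := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`

		sc := bufio.NewScanner(strings.NewReader(stdout))
		for sc.Scan() {
			var ev map[string]any
			if err := json.Unmarshal([]byte(strings.TrimSpace(sc.Text())), &ev); err != nil {
				// The "OUTPUT: " line yields:
				// invalid character 'O' looking for beginning of value
				fmt.Println(err)
				continue
			}
			fmt.Println("event type:", ev["type"])
		}
	}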

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-291000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-291000 --output=json --user=testUser: exit status 83 (81.526709ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6002d140-d0fa-4372-887c-1def97047a5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-291000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"83506db8-b869-47a3-9931-64d4e265edd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-291000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-291000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-291000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-291000 --output=json --user=testUser: exit status 83 (47.439709ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-291000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-291000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-291000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-291000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.31s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-393000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-393000 --driver=qemu2 : exit status 80 (9.859427708s)

                                                
                                                
-- stdout --
	* [first-393000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-393000" primary control-plane node in "first-393000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-393000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-393000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-393000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-18 13:38:35.817447 -0700 PDT m=+597.433568293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-394000 -n second-394000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-394000 -n second-394000: exit status 85 (82.172667ms)

                                                
                                                
-- stdout --
	* Profile "second-394000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-394000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-394000" host is not running, skipping log retrieval (state="* Profile \"second-394000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-394000\"")
helpers_test.go:175: Cleaning up "second-394000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-394000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-18 13:38:36.129791 -0700 PDT m=+597.745912793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-393000 -n first-393000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-393000 -n first-393000: exit status 7 (31.610041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-393000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-393000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-393000
--- FAIL: TestMinikubeProfile (10.31s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (11.02s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-806000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-806000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.944441s)

                                                
                                                
-- stdout --
	* [mount-start-1-806000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-806000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-806000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-806000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-806000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-806000 -n mount-start-1-806000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-806000 -n mount-start-1-806000: exit status 7 (70.445667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-806000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (11.02s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-685000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-685000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.859763834s)

                                                
                                                
-- stdout --
	* [multinode-685000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-685000" primary control-plane node in "multinode-685000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-685000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:38:47.645803    8811 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:38:47.645946    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:38:47.645949    8811 out.go:304] Setting ErrFile to fd 2...
	I0318 13:38:47.645951    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:38:47.646079    8811 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:38:47.647114    8811 out.go:298] Setting JSON to false
	I0318 13:38:47.663219    8811 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5899,"bootTime":1710788428,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:38:47.663293    8811 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:38:47.669779    8811 out.go:177] * [multinode-685000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:38:47.676784    8811 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:38:47.680807    8811 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:38:47.676868    8811 notify.go:220] Checking for updates...
	I0318 13:38:47.686719    8811 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:38:47.689738    8811 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:38:47.692697    8811 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:38:47.695729    8811 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:38:47.698981    8811 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:38:47.702626    8811 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 13:38:47.709704    8811 start.go:297] selected driver: qemu2
	I0318 13:38:47.709710    8811 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:38:47.709718    8811 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:38:47.711975    8811 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:38:47.713533    8811 out.go:177] * Automatically selected the socket_vmnet network
	I0318 13:38:47.716793    8811 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:38:47.716839    8811 cni.go:84] Creating CNI manager for ""
	I0318 13:38:47.716845    8811 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0318 13:38:47.716849    8811 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 13:38:47.716887    8811 start.go:340] cluster config:
	{Name:multinode-685000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-685000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:38:47.721310    8811 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:38:47.728692    8811 out.go:177] * Starting "multinode-685000" primary control-plane node in "multinode-685000" cluster
	I0318 13:38:47.732742    8811 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:38:47.732758    8811 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:38:47.732768    8811 cache.go:56] Caching tarball of preloaded images
	I0318 13:38:47.732850    8811 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:38:47.732861    8811 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:38:47.733107    8811 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/multinode-685000/config.json ...
	I0318 13:38:47.733120    8811 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/multinode-685000/config.json: {Name:mkbc16d35e7f84c8293e8c3b05bcc7ba575cc752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:38:47.733336    8811 start.go:360] acquireMachinesLock for multinode-685000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:38:47.733370    8811 start.go:364] duration metric: took 27.542µs to acquireMachinesLock for "multinode-685000"
	I0318 13:38:47.733383    8811 start.go:93] Provisioning new machine with config: &{Name:multinode-685000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-685000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:38:47.733412    8811 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:38:47.741682    8811 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:38:47.759390    8811 start.go:159] libmachine.API.Create for "multinode-685000" (driver="qemu2")
	I0318 13:38:47.759416    8811 client.go:168] LocalClient.Create starting
	I0318 13:38:47.759483    8811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:38:47.759511    8811 main.go:141] libmachine: Decoding PEM data...
	I0318 13:38:47.759522    8811 main.go:141] libmachine: Parsing certificate...
	I0318 13:38:47.759567    8811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:38:47.759594    8811 main.go:141] libmachine: Decoding PEM data...
	I0318 13:38:47.759602    8811 main.go:141] libmachine: Parsing certificate...
	I0318 13:38:47.759952    8811 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:38:47.904490    8811 main.go:141] libmachine: Creating SSH key...
	I0318 13:38:47.987340    8811 main.go:141] libmachine: Creating Disk image...
	I0318 13:38:47.987345    8811 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:38:47.987531    8811 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/disk.qcow2
	I0318 13:38:48.000094    8811 main.go:141] libmachine: STDOUT: 
	I0318 13:38:48.000111    8811 main.go:141] libmachine: STDERR: 
	I0318 13:38:48.000167    8811 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/disk.qcow2 +20000M
	I0318 13:38:48.011065    8811 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:38:48.011092    8811 main.go:141] libmachine: STDERR: 
	I0318 13:38:48.011103    8811 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/disk.qcow2
	I0318 13:38:48.011114    8811 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:38:48.011140    8811 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:57:9d:a5:7a:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/disk.qcow2
	I0318 13:38:48.012851    8811 main.go:141] libmachine: STDOUT: 
	I0318 13:38:48.012865    8811 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:38:48.012885    8811 client.go:171] duration metric: took 253.463708ms to LocalClient.Create
	I0318 13:38:50.013211    8811 start.go:128] duration metric: took 2.279779458s to createHost
	I0318 13:38:50.013288    8811 start.go:83] releasing machines lock for "multinode-685000", held for 2.279920458s
	W0318 13:38:50.013356    8811 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:38:50.024443    8811 out.go:177] * Deleting "multinode-685000" in qemu2 ...
	W0318 13:38:50.059285    8811 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:38:50.059315    8811 start.go:728] Will try again in 5 seconds ...
	I0318 13:38:55.061453    8811 start.go:360] acquireMachinesLock for multinode-685000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:38:55.061925    8811 start.go:364] duration metric: took 297.25µs to acquireMachinesLock for "multinode-685000"
	I0318 13:38:55.062073    8811 start.go:93] Provisioning new machine with config: &{Name:multinode-685000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-685000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:38:55.062368    8811 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:38:55.074972    8811 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:38:55.124280    8811 start.go:159] libmachine.API.Create for "multinode-685000" (driver="qemu2")
	I0318 13:38:55.124324    8811 client.go:168] LocalClient.Create starting
	I0318 13:38:55.124431    8811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:38:55.124501    8811 main.go:141] libmachine: Decoding PEM data...
	I0318 13:38:55.124514    8811 main.go:141] libmachine: Parsing certificate...
	I0318 13:38:55.124585    8811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:38:55.124627    8811 main.go:141] libmachine: Decoding PEM data...
	I0318 13:38:55.124642    8811 main.go:141] libmachine: Parsing certificate...
	I0318 13:38:55.125156    8811 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:38:55.281130    8811 main.go:141] libmachine: Creating SSH key...
	I0318 13:38:55.400658    8811 main.go:141] libmachine: Creating Disk image...
	I0318 13:38:55.400667    8811 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:38:55.400877    8811 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/disk.qcow2
	I0318 13:38:55.413428    8811 main.go:141] libmachine: STDOUT: 
	I0318 13:38:55.413452    8811 main.go:141] libmachine: STDERR: 
	I0318 13:38:55.413510    8811 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/disk.qcow2 +20000M
	I0318 13:38:55.424195    8811 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:38:55.424295    8811 main.go:141] libmachine: STDERR: 
	I0318 13:38:55.424307    8811 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/disk.qcow2
	I0318 13:38:55.424313    8811 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:38:55.424354    8811 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:6b:88:37:29:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/disk.qcow2
	I0318 13:38:55.426084    8811 main.go:141] libmachine: STDOUT: 
	I0318 13:38:55.426098    8811 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:38:55.426111    8811 client.go:171] duration metric: took 301.782ms to LocalClient.Create
	I0318 13:38:57.428302    8811 start.go:128] duration metric: took 2.365901208s to createHost
	I0318 13:38:57.428383    8811 start.go:83] releasing machines lock for "multinode-685000", held for 2.366426541s
	W0318 13:38:57.428759    8811 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-685000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-685000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:38:57.444372    8811 out.go:177] 
	W0318 13:38:57.448540    8811 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:38:57.448575    8811 out.go:239] * 
	* 
	W0318 13:38:57.451321    8811 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:38:57.461400    8811 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-685000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000: exit status 7 (68.953459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-685000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.93s)
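
Note: the -v=8 trace above makes the retry flow visible: libmachine builds the disk image successfully, fails only at the socket_vmnet_client handoff, deletes the half-created machine, waits five seconds (start.go:728), retries once, and exits with GUEST_PROVISION (exit status 80). A sketch of that observed control flow; this illustrates the log, not minikube's actual implementation:

	// retry_sketch.go: the one-retry-then-exit shape seen in the trace above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for libmachine.API.Create; in this run every
	// attempt ends with the same socket_vmnet connection error.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}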

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (120.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-685000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-685000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (60.634333ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-685000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-685000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-685000 -- rollout status deployment/busybox: exit status 1 (57.908375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-685000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.833417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-685000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.045333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-685000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.190084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-685000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.747333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-685000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.225125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-685000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.404ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-685000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.4365ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-685000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.350458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-685000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.135625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-685000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.5815ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-685000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.719542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-685000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.588333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-685000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-685000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-685000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.069583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-685000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-685000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-685000 -- exec  -- nslookup kubernetes.default: exit status 1 (58.20775ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-685000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-685000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-685000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.254458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-685000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000: exit status 7 (32.474917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-685000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (120.44s)
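
Note: the repeated kubectl failure above, error: no server found for cluster "multinode-685000", is what kubectl reports when the kubeconfig entry for the profile has no server recorded; the VM behind the profile never started, so no API endpoint was ever written. A minimal sketch (not the suite's actual harness code) of how such a step shells out and surfaces the non-zero exit, reusing the binary path and profile name from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the failing steps above use; binary path and
	// profile name are copied from the log.
	cmd := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", "multinode-685000",
		"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}")
	out, err := cmd.CombinedOutput()
	fmt.Printf("combined output: %s\n", out)
	if err != nil {
		// With no server recorded for the cluster, kubectl exits 1 and the
		// harness reports "failed to retrieve Pod IPs (may be temporary)".
		fmt.Println("non-zero exit:", err)
	}
}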

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.179375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-685000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000: exit status 7 (31.955ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-685000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-685000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-685000 -v 3 --alsologtostderr: exit status 83 (43.542292ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-685000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-685000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:40:58.107910    8963 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:40:58.108050    8963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:40:58.108054    8963 out.go:304] Setting ErrFile to fd 2...
	I0318 13:40:58.108056    8963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:40:58.108176    8963 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:40:58.108383    8963 mustload.go:65] Loading cluster: multinode-685000
	I0318 13:40:58.108559    8963 config.go:182] Loaded profile config "multinode-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:40:58.113591    8963 out.go:177] * The control-plane node multinode-685000 host is not running: state=Stopped
	I0318 13:40:58.116554    8963 out.go:177]   To start a cluster, run: "minikube start -p multinode-685000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-685000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000: exit status 7 (31.647625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-685000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-685000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-685000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.711ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-685000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-685000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-685000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000: exit status 7 (32.116125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-685000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
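
Note: two errors stack here. kubectl finds no context named multinode-685000 and prints nothing to stdout, and the test then feeds that empty output to a JSON decoder, which fails with "unexpected end of JSON input". A minimal reproduction of the second error:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// kubectl wrote nothing to stdout (the context lookup failed), so the
	// decoder sees zero bytes.
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}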

                                                
                                    
TestMultiNode/serial/ProfileList (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-685000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-685000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-685000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-685000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000: exit status 7 (32.064333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-685000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.11s)
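
Note: the node-count assertion above parses the 'profile list --output json' payload and counts the entries under Config.Nodes. A minimal sketch of that check, assuming only the keys visible in the log (Name and Config.Nodes; every other field in the real payload is omitted here):

package main

import (
	"encoding/json"
	"fmt"
)

// profileList mirrors only the keys visible in the log above.
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

func main() {
	// Abbreviated form of the payload shown above: one recorded node.
	payload := []byte(`{"invalid":[],"valid":[{"Name":"multinode-685000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(payload, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // prints 1; the test wants 3
	}
}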

                                                
                                    
TestMultiNode/serial/CopyFile (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-685000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-685000 status --output json --alsologtostderr: exit status 7 (32.334583ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-685000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:40:58.352697    8976 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:40:58.352827    8976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:40:58.352830    8976 out.go:304] Setting ErrFile to fd 2...
	I0318 13:40:58.352832    8976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:40:58.352978    8976 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:40:58.353096    8976 out.go:298] Setting JSON to true
	I0318 13:40:58.353108    8976 mustload.go:65] Loading cluster: multinode-685000
	I0318 13:40:58.353171    8976 notify.go:220] Checking for updates...
	I0318 13:40:58.353297    8976 config.go:182] Loaded profile config "multinode-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:40:58.353304    8976 status.go:255] checking status of multinode-685000 ...
	I0318 13:40:58.353517    8976 status.go:330] multinode-685000 host status = "Stopped" (err=<nil>)
	I0318 13:40:58.353521    8976 status.go:343] host is not running, skipping remaining checks
	I0318 13:40:58.353523    8976 status.go:257] multinode-685000 status: &{Name:multinode-685000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-685000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000: exit status 7 (32.389375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-685000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)
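
Note: the decode failure above ("json: cannot unmarshal object into Go value of type []cmd.Status") is a shape mismatch: with a single node, 'status --output json' emits one JSON object, while the test decodes into a slice. A minimal reproduction, with a hypothetical 'status' struct standing in for the suite's cmd.Status:

package main

import (
	"encoding/json"
	"fmt"
)

// status is a hypothetical stand-in for the suite's cmd.Status type.
type status struct {
	Name string
	Host string
}

func main() {
	// Single-node clusters emit one object, not an array.
	out := []byte(`{"Name":"multinode-685000","Host":"Stopped"}`)

	var many []status
	fmt.Println(json.Unmarshal(out, &many)) // json: cannot unmarshal object into Go value of type []main.status

	var one status
	fmt.Println(json.Unmarshal(out, &one), one) // <nil> {multinode-685000 Stopped}
}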

                                                
                                    
TestMultiNode/serial/StopNode (0.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-685000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-685000 node stop m03: exit status 85 (49.102208ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-685000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-685000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-685000 status: exit status 7 (31.939625ms)

                                                
                                                
-- stdout --
	multinode-685000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-685000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-685000 status --alsologtostderr: exit status 7 (32.117125ms)

                                                
                                                
-- stdout --
	multinode-685000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:40:58.498962    8984 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:40:58.499103    8984 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:40:58.499106    8984 out.go:304] Setting ErrFile to fd 2...
	I0318 13:40:58.499108    8984 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:40:58.499256    8984 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:40:58.499378    8984 out.go:298] Setting JSON to false
	I0318 13:40:58.499390    8984 mustload.go:65] Loading cluster: multinode-685000
	I0318 13:40:58.499452    8984 notify.go:220] Checking for updates...
	I0318 13:40:58.499627    8984 config.go:182] Loaded profile config "multinode-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:40:58.499633    8984 status.go:255] checking status of multinode-685000 ...
	I0318 13:40:58.499834    8984 status.go:330] multinode-685000 host status = "Stopped" (err=<nil>)
	I0318 13:40:58.499838    8984 status.go:343] host is not running, skipping remaining checks
	I0318 13:40:58.499840    8984 status.go:257] multinode-685000 status: &{Name:multinode-685000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-685000 status --alsologtostderr": multinode-685000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000: exit status 7 (32.358458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-685000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)
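
Note: the "incorrect number of running kubelets" message suggests the test counts "kubelet: Running" lines in the plain-text status output and finds none while the host is stopped (the missing m03 worker follows from the single-node profile shown under ProfileList). A sketch of that count, assuming the predicate is a simple substring match (the suite's actual check may differ):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status text copied from the log above.
	status := `multinode-685000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped`
	running := strings.Count(status, "kubelet: Running")
	fmt.Printf("running kubelets: %d\n", running) // 0 with the host stopped
}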

                                                
                                    
TestMultiNode/serial/StartAfterStop (48.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-685000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-685000 node start m03 -v=7 --alsologtostderr: exit status 85 (49.060583ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:40:58.563673    8988 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:40:58.564070    8988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:40:58.564074    8988 out.go:304] Setting ErrFile to fd 2...
	I0318 13:40:58.564076    8988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:40:58.564243    8988 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:40:58.564451    8988 mustload.go:65] Loading cluster: multinode-685000
	I0318 13:40:58.564639    8988 config.go:182] Loaded profile config "multinode-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:40:58.568602    8988 out.go:177] 
	W0318 13:40:58.571713    8988 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0318 13:40:58.571718    8988 out.go:239] * 
	* 
	W0318 13:40:58.573532    8988 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:40:58.577718    8988 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0318 13:40:58.563673    8988 out.go:291] Setting OutFile to fd 1 ...
I0318 13:40:58.564070    8988 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:40:58.564074    8988 out.go:304] Setting ErrFile to fd 2...
I0318 13:40:58.564076    8988 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:40:58.564243    8988 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
I0318 13:40:58.564451    8988 mustload.go:65] Loading cluster: multinode-685000
I0318 13:40:58.564639    8988 config.go:182] Loaded profile config "multinode-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 13:40:58.568602    8988 out.go:177] 
W0318 13:40:58.571713    8988 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0318 13:40:58.571718    8988 out.go:239] * 
* 
W0318 13:40:58.573532    8988 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0318 13:40:58.577718    8988 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-685000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-685000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-685000 status -v=7 --alsologtostderr: exit status 7 (31.17125ms)

                                                
                                                
-- stdout --
	multinode-685000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:40:58.612122    8990 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:40:58.612263    8990 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:40:58.612269    8990 out.go:304] Setting ErrFile to fd 2...
	I0318 13:40:58.612272    8990 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:40:58.612399    8990 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:40:58.612514    8990 out.go:298] Setting JSON to false
	I0318 13:40:58.612525    8990 mustload.go:65] Loading cluster: multinode-685000
	I0318 13:40:58.612583    8990 notify.go:220] Checking for updates...
	I0318 13:40:58.612718    8990 config.go:182] Loaded profile config "multinode-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:40:58.612727    8990 status.go:255] checking status of multinode-685000 ...
	I0318 13:40:58.612949    8990 status.go:330] multinode-685000 host status = "Stopped" (err=<nil>)
	I0318 13:40:58.612953    8990 status.go:343] host is not running, skipping remaining checks
	I0318 13:40:58.612955    8990 status.go:257] multinode-685000 status: &{Name:multinode-685000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-685000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-685000 status -v=7 --alsologtostderr: exit status 7 (78.949541ms)

                                                
                                                
-- stdout --
	multinode-685000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:40:59.503140    8992 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:40:59.503350    8992 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:40:59.503354    8992 out.go:304] Setting ErrFile to fd 2...
	I0318 13:40:59.503357    8992 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:40:59.503517    8992 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:40:59.503689    8992 out.go:298] Setting JSON to false
	I0318 13:40:59.503704    8992 mustload.go:65] Loading cluster: multinode-685000
	I0318 13:40:59.503729    8992 notify.go:220] Checking for updates...
	I0318 13:40:59.503972    8992 config.go:182] Loaded profile config "multinode-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:40:59.503981    8992 status.go:255] checking status of multinode-685000 ...
	I0318 13:40:59.504249    8992 status.go:330] multinode-685000 host status = "Stopped" (err=<nil>)
	I0318 13:40:59.504254    8992 status.go:343] host is not running, skipping remaining checks
	I0318 13:40:59.504257    8992 status.go:257] multinode-685000 status: &{Name:multinode-685000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-685000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-685000 status -v=7 --alsologtostderr: exit status 7 (77.881167ms)

                                                
                                                
-- stdout --
	multinode-685000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:41:01.109029    8998 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:41:01.109222    8998 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:01.109227    8998 out.go:304] Setting ErrFile to fd 2...
	I0318 13:41:01.109229    8998 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:01.109394    8998 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:41:01.109535    8998 out.go:298] Setting JSON to false
	I0318 13:41:01.109549    8998 mustload.go:65] Loading cluster: multinode-685000
	I0318 13:41:01.109582    8998 notify.go:220] Checking for updates...
	I0318 13:41:01.109785    8998 config.go:182] Loaded profile config "multinode-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:41:01.109794    8998 status.go:255] checking status of multinode-685000 ...
	I0318 13:41:01.110078    8998 status.go:330] multinode-685000 host status = "Stopped" (err=<nil>)
	I0318 13:41:01.110083    8998 status.go:343] host is not running, skipping remaining checks
	I0318 13:41:01.110086    8998 status.go:257] multinode-685000 status: &{Name:multinode-685000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-685000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-685000 status -v=7 --alsologtostderr: exit status 7 (76.882667ms)

                                                
                                                
-- stdout --
	multinode-685000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:41:04.043879    9002 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:41:04.044039    9002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:04.044045    9002 out.go:304] Setting ErrFile to fd 2...
	I0318 13:41:04.044048    9002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:04.044200    9002 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:41:04.044359    9002 out.go:298] Setting JSON to false
	I0318 13:41:04.044384    9002 mustload.go:65] Loading cluster: multinode-685000
	I0318 13:41:04.044419    9002 notify.go:220] Checking for updates...
	I0318 13:41:04.044638    9002 config.go:182] Loaded profile config "multinode-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:41:04.044646    9002 status.go:255] checking status of multinode-685000 ...
	I0318 13:41:04.044905    9002 status.go:330] multinode-685000 host status = "Stopped" (err=<nil>)
	I0318 13:41:04.044910    9002 status.go:343] host is not running, skipping remaining checks
	I0318 13:41:04.044913    9002 status.go:257] multinode-685000 status: &{Name:multinode-685000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-685000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-685000 status -v=7 --alsologtostderr: exit status 7 (75.98125ms)

                                                
                                                
-- stdout --
	multinode-685000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:41:07.587634    9006 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:41:07.587830    9006 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:07.587834    9006 out.go:304] Setting ErrFile to fd 2...
	I0318 13:41:07.587837    9006 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:07.588037    9006 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:41:07.588188    9006 out.go:298] Setting JSON to false
	I0318 13:41:07.588212    9006 mustload.go:65] Loading cluster: multinode-685000
	I0318 13:41:07.588251    9006 notify.go:220] Checking for updates...
	I0318 13:41:07.588475    9006 config.go:182] Loaded profile config "multinode-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:41:07.588484    9006 status.go:255] checking status of multinode-685000 ...
	I0318 13:41:07.588754    9006 status.go:330] multinode-685000 host status = "Stopped" (err=<nil>)
	I0318 13:41:07.588759    9006 status.go:343] host is not running, skipping remaining checks
	I0318 13:41:07.588762    9006 status.go:257] multinode-685000 status: &{Name:multinode-685000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-685000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-685000 status -v=7 --alsologtostderr: exit status 7 (74.0985ms)

                                                
                                                
-- stdout --
	multinode-685000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:41:14.647891    9011 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:41:14.648101    9011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:14.648106    9011 out.go:304] Setting ErrFile to fd 2...
	I0318 13:41:14.648109    9011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:14.648286    9011 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:41:14.648449    9011 out.go:298] Setting JSON to false
	I0318 13:41:14.648465    9011 mustload.go:65] Loading cluster: multinode-685000
	I0318 13:41:14.648489    9011 notify.go:220] Checking for updates...
	I0318 13:41:14.648696    9011 config.go:182] Loaded profile config "multinode-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:41:14.648703    9011 status.go:255] checking status of multinode-685000 ...
	I0318 13:41:14.648952    9011 status.go:330] multinode-685000 host status = "Stopped" (err=<nil>)
	I0318 13:41:14.648957    9011 status.go:343] host is not running, skipping remaining checks
	I0318 13:41:14.648960    9011 status.go:257] multinode-685000 status: &{Name:multinode-685000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-685000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-685000 status -v=7 --alsologtostderr: exit status 7 (76.441834ms)

                                                
                                                
-- stdout --
	multinode-685000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:41:21.955545    9021 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:41:21.955764    9021 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:21.955768    9021 out.go:304] Setting ErrFile to fd 2...
	I0318 13:41:21.955772    9021 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:21.955950    9021 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:41:21.956094    9021 out.go:298] Setting JSON to false
	I0318 13:41:21.956109    9021 mustload.go:65] Loading cluster: multinode-685000
	I0318 13:41:21.956148    9021 notify.go:220] Checking for updates...
	I0318 13:41:21.956349    9021 config.go:182] Loaded profile config "multinode-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:41:21.956358    9021 status.go:255] checking status of multinode-685000 ...
	I0318 13:41:21.956662    9021 status.go:330] multinode-685000 host status = "Stopped" (err=<nil>)
	I0318 13:41:21.956667    9021 status.go:343] host is not running, skipping remaining checks
	I0318 13:41:21.956670    9021 status.go:257] multinode-685000 status: &{Name:multinode-685000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-685000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-685000 status -v=7 --alsologtostderr: exit status 7 (74.265333ms)

                                                
                                                
-- stdout --
	multinode-685000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:41:29.656147    9025 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:41:29.656340    9025 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:29.656345    9025 out.go:304] Setting ErrFile to fd 2...
	I0318 13:41:29.656348    9025 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:29.656513    9025 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:41:29.656689    9025 out.go:298] Setting JSON to false
	I0318 13:41:29.656705    9025 mustload.go:65] Loading cluster: multinode-685000
	I0318 13:41:29.656733    9025 notify.go:220] Checking for updates...
	I0318 13:41:29.656984    9025 config.go:182] Loaded profile config "multinode-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:41:29.656993    9025 status.go:255] checking status of multinode-685000 ...
	I0318 13:41:29.657265    9025 status.go:330] multinode-685000 host status = "Stopped" (err=<nil>)
	I0318 13:41:29.657270    9025 status.go:343] host is not running, skipping remaining checks
	I0318 13:41:29.657273    9025 status.go:257] multinode-685000 status: &{Name:multinode-685000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-685000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-685000 status -v=7 --alsologtostderr: exit status 7 (75.99325ms)

                                                
                                                
-- stdout --
	multinode-685000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:41:46.596312    9034 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:41:46.596494    9034 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:46.596498    9034 out.go:304] Setting ErrFile to fd 2...
	I0318 13:41:46.596501    9034 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:46.596665    9034 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:41:46.596819    9034 out.go:298] Setting JSON to false
	I0318 13:41:46.596834    9034 mustload.go:65] Loading cluster: multinode-685000
	I0318 13:41:46.596868    9034 notify.go:220] Checking for updates...
	I0318 13:41:46.597072    9034 config.go:182] Loaded profile config "multinode-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:41:46.597079    9034 status.go:255] checking status of multinode-685000 ...
	I0318 13:41:46.597337    9034 status.go:330] multinode-685000 host status = "Stopped" (err=<nil>)
	I0318 13:41:46.597343    9034 status.go:343] host is not running, skipping remaining checks
	I0318 13:41:46.597346    9034 status.go:257] multinode-685000 status: &{Name:multinode-685000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-685000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000: exit status 7 (35.035667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-685000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (48.10s)
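
Note: the status checks above repeat at growing intervals (13:40:58, :58, :59, 13:41:01, :04, :07, :14, :21, :29, :46), consistent with a backoff-style poll that waits for the node to report Running before giving up after roughly 48 seconds. A minimal sketch of that pattern, assuming a simple doubling backoff (the suite's actual retry policy may differ):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	delay := time.Second
	for attempt := 1; attempt <= 8; attempt++ {
		// Same status command the log shows being re-run.
		err := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-685000", "status").Run()
		if err == nil {
			fmt.Println("node is running")
			return
		}
		fmt.Printf("attempt %d: %v; retrying in %s\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2 // widen the gap between polls, as the timestamps suggest
	}
	fmt.Println("giving up: host stayed Stopped")
}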

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-685000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-685000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-685000: (3.195881167s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-685000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-685000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.230937375s)

                                                
                                                
-- stdout --
	* [multinode-685000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-685000" primary control-plane node in "multinode-685000" cluster
	* Restarting existing qemu2 VM for "multinode-685000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-685000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:41:49.928990    9058 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:41:49.929150    9058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:49.929155    9058 out.go:304] Setting ErrFile to fd 2...
	I0318 13:41:49.929158    9058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:49.929337    9058 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:41:49.930692    9058 out.go:298] Setting JSON to false
	I0318 13:41:49.950985    9058 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6081,"bootTime":1710788428,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:41:49.951077    9058 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:41:49.955695    9058 out.go:177] * [multinode-685000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:41:49.963756    9058 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:41:49.963781    9058 notify.go:220] Checking for updates...
	I0318 13:41:49.971634    9058 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:41:49.975688    9058 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:41:49.978649    9058 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:41:49.981657    9058 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:41:49.984669    9058 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:41:49.986534    9058 config.go:182] Loaded profile config "multinode-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:41:49.986602    9058 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:41:49.991624    9058 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 13:41:49.998460    9058 start.go:297] selected driver: qemu2
	I0318 13:41:49.998468    9058 start.go:901] validating driver "qemu2" against &{Name:multinode-685000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-685000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:41:49.998550    9058 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:41:50.001135    9058 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:41:50.001181    9058 cni.go:84] Creating CNI manager for ""
	I0318 13:41:50.001186    9058 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 13:41:50.001249    9058 start.go:340] cluster config:
	{Name:multinode-685000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-685000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:41:50.006015    9058 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:41:50.013627    9058 out.go:177] * Starting "multinode-685000" primary control-plane node in "multinode-685000" cluster
	I0318 13:41:50.017683    9058 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:41:50.017700    9058 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:41:50.017710    9058 cache.go:56] Caching tarball of preloaded images
	I0318 13:41:50.017793    9058 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:41:50.017799    9058 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:41:50.017876    9058 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/multinode-685000/config.json ...
	I0318 13:41:50.018345    9058 start.go:360] acquireMachinesLock for multinode-685000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:41:50.018385    9058 start.go:364] duration metric: took 31.041µs to acquireMachinesLock for "multinode-685000"
	I0318 13:41:50.018397    9058 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:41:50.018401    9058 fix.go:54] fixHost starting: 
	I0318 13:41:50.018532    9058 fix.go:112] recreateIfNeeded on multinode-685000: state=Stopped err=<nil>
	W0318 13:41:50.018542    9058 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:41:50.021692    9058 out.go:177] * Restarting existing qemu2 VM for "multinode-685000" ...
	I0318 13:41:50.029650    9058 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:6b:88:37:29:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/disk.qcow2
	I0318 13:41:50.031870    9058 main.go:141] libmachine: STDOUT: 
	I0318 13:41:50.031899    9058 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:41:50.031940    9058 fix.go:56] duration metric: took 13.537333ms for fixHost
	I0318 13:41:50.031945    9058 start.go:83] releasing machines lock for "multinode-685000", held for 13.554375ms
	W0318 13:41:50.031956    9058 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:41:50.031999    9058 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:41:50.032005    9058 start.go:728] Will try again in 5 seconds ...
	I0318 13:41:55.033799    9058 start.go:360] acquireMachinesLock for multinode-685000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:41:55.034207    9058 start.go:364] duration metric: took 311.084µs to acquireMachinesLock for "multinode-685000"
	I0318 13:41:55.034337    9058 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:41:55.034355    9058 fix.go:54] fixHost starting: 
	I0318 13:41:55.035055    9058 fix.go:112] recreateIfNeeded on multinode-685000: state=Stopped err=<nil>
	W0318 13:41:55.035080    9058 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:41:55.042600    9058 out.go:177] * Restarting existing qemu2 VM for "multinode-685000" ...
	I0318 13:41:55.046820    9058 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:6b:88:37:29:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/disk.qcow2
	I0318 13:41:55.056422    9058 main.go:141] libmachine: STDOUT: 
	I0318 13:41:55.056495    9058 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:41:55.056576    9058 fix.go:56] duration metric: took 22.217125ms for fixHost
	I0318 13:41:55.056596    9058 start.go:83] releasing machines lock for "multinode-685000", held for 22.36875ms
	W0318 13:41:55.056819    9058 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-685000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-685000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:41:55.064579    9058 out.go:177] 
	W0318 13:41:55.067627    9058 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:41:55.067682    9058 out.go:239] * 
	* 
	W0318 13:41:55.070224    9058 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:41:55.078495    9058 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-685000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-685000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000: exit status 7 (35.215833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-685000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.57s)
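
Every restart attempt in this log dies at the same step: minikube launches qemu through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, and that connection is refused, so the VM never boots. A minimal diagnostic sketch for the CI host (the "<client> <socket> <command...>" form is taken from the qemu invocation above; running it with `true` to exercise only the connect step is an assumption):

	# Is the daemon alive, and does the socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# The client connects to the socket and execs its argument with the
	# connected descriptor handed down (qemu reads it as -netdev socket,...,fd=3),
	# so a trivial command reproduces just the failing connect:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true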

TestMultiNode/serial/DeleteNode (0.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-685000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-685000 node delete m03: exit status 83 (41.98275ms)

-- stdout --
	* The control-plane node multinode-685000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-685000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-685000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-685000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-685000 status --alsologtostderr: exit status 7 (32.670417ms)

-- stdout --
	multinode-685000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 13:41:55.273162    9075 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:41:55.273320    9075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:55.273323    9075 out.go:304] Setting ErrFile to fd 2...
	I0318 13:41:55.273326    9075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:55.273444    9075 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:41:55.273559    9075 out.go:298] Setting JSON to false
	I0318 13:41:55.273570    9075 mustload.go:65] Loading cluster: multinode-685000
	I0318 13:41:55.273638    9075 notify.go:220] Checking for updates...
	I0318 13:41:55.273758    9075 config.go:182] Loaded profile config "multinode-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:41:55.273765    9075 status.go:255] checking status of multinode-685000 ...
	I0318 13:41:55.273983    9075 status.go:330] multinode-685000 host status = "Stopped" (err=<nil>)
	I0318 13:41:55.273987    9075 status.go:343] host is not running, skipping remaining checks
	I0318 13:41:55.273990    9075 status.go:257] multinode-685000 status: &{Name:multinode-685000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-685000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000: exit status 7 (32.181375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-685000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)
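
Exit status 83 here is not an independent failure: the control-plane host is still state=Stopped from the failed restart above, so "node delete" refuses to run, and this subtest (like the rest of the serial chain) cascades from the same socket_vmnet error. The refusal behaves roughly like this guard (an illustrative sketch only; the --format template is the one the post-mortem helper already uses):

	# Only attempt node operations when the control-plane host is up
	host=$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p multinode-685000 -n multinode-685000)
	if [ "$host" = "Running" ]; then
	  out/minikube-darwin-arm64 -p multinode-685000 node delete m03
	fi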

TestMultiNode/serial/StopMultiNode (3.46s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-685000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-685000 stop: (3.322512375s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-685000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-685000 status: exit status 7 (68.027625ms)

-- stdout --
	multinode-685000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-685000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-685000 status --alsologtostderr: exit status 7 (33.7185ms)

-- stdout --
	multinode-685000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 13:41:58.730813    9100 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:41:58.730956    9100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:58.730959    9100 out.go:304] Setting ErrFile to fd 2...
	I0318 13:41:58.730962    9100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:58.731084    9100 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:41:58.731195    9100 out.go:298] Setting JSON to false
	I0318 13:41:58.731213    9100 mustload.go:65] Loading cluster: multinode-685000
	I0318 13:41:58.731254    9100 notify.go:220] Checking for updates...
	I0318 13:41:58.731430    9100 config.go:182] Loaded profile config "multinode-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:41:58.731437    9100 status.go:255] checking status of multinode-685000 ...
	I0318 13:41:58.731636    9100 status.go:330] multinode-685000 host status = "Stopped" (err=<nil>)
	I0318 13:41:58.731639    9100 status.go:343] host is not running, skipping remaining checks
	I0318 13:41:58.731641    9100 status.go:257] multinode-685000 status: &{Name:multinode-685000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-685000 status --alsologtostderr": multinode-685000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-685000 status --alsologtostderr": multinode-685000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000: exit status 7 (32.379ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-685000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.46s)
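
Here the stop itself succeeded (3.32s) and status correctly shows the control plane as Stopped; what fails are the count assertions at multinode_test.go:364 and :368, which expect one "host: Stopped"/"kubelet: Stopped" stanza per node of the intended multi-node cluster but see only the single control-plane stanza, since the worker nodes were never created. The checks amount to something like this (a sketch, not the test's literal code):

	# One stanza per node is expected; only the control plane is present
	out/minikube-darwin-arm64 -p multinode-685000 status --alsologtostderr | grep -c 'host: Stopped'
	out/minikube-darwin-arm64 -p multinode-685000 status --alsologtostderr | grep -c 'kubelet: Stopped'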

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-685000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-685000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.183682208s)

-- stdout --
	* [multinode-685000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-685000" primary control-plane node in "multinode-685000" cluster
	* Restarting existing qemu2 VM for "multinode-685000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-685000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:41:58.795576    9104 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:41:58.795707    9104 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:58.795709    9104 out.go:304] Setting ErrFile to fd 2...
	I0318 13:41:58.795712    9104 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:58.795832    9104 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:41:58.796785    9104 out.go:298] Setting JSON to false
	I0318 13:41:58.812965    9104 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6090,"bootTime":1710788428,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:41:58.813031    9104 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:41:58.817755    9104 out.go:177] * [multinode-685000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:41:58.824713    9104 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:41:58.827759    9104 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:41:58.824744    9104 notify.go:220] Checking for updates...
	I0318 13:41:58.830803    9104 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:41:58.833701    9104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:41:58.836736    9104 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:41:58.839604    9104 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:41:58.842940    9104 config.go:182] Loaded profile config "multinode-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:41:58.843191    9104 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:41:58.846690    9104 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 13:41:58.853713    9104 start.go:297] selected driver: qemu2
	I0318 13:41:58.853719    9104 start.go:901] validating driver "qemu2" against &{Name:multinode-685000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-685000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:41:58.853784    9104 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:41:58.856045    9104 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:41:58.856090    9104 cni.go:84] Creating CNI manager for ""
	I0318 13:41:58.856095    9104 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 13:41:58.856140    9104 start.go:340] cluster config:
	{Name:multinode-685000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-685000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:41:58.860460    9104 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:41:58.868678    9104 out.go:177] * Starting "multinode-685000" primary control-plane node in "multinode-685000" cluster
	I0318 13:41:58.872701    9104 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:41:58.872717    9104 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:41:58.872725    9104 cache.go:56] Caching tarball of preloaded images
	I0318 13:41:58.872782    9104 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:41:58.872788    9104 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:41:58.872847    9104 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/multinode-685000/config.json ...
	I0318 13:41:58.873306    9104 start.go:360] acquireMachinesLock for multinode-685000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:41:58.873335    9104 start.go:364] duration metric: took 22.958µs to acquireMachinesLock for "multinode-685000"
	I0318 13:41:58.873345    9104 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:41:58.873349    9104 fix.go:54] fixHost starting: 
	I0318 13:41:58.873471    9104 fix.go:112] recreateIfNeeded on multinode-685000: state=Stopped err=<nil>
	W0318 13:41:58.873484    9104 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:41:58.877705    9104 out.go:177] * Restarting existing qemu2 VM for "multinode-685000" ...
	I0318 13:41:58.885666    9104 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:6b:88:37:29:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/disk.qcow2
	I0318 13:41:58.887649    9104 main.go:141] libmachine: STDOUT: 
	I0318 13:41:58.887677    9104 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:41:58.887707    9104 fix.go:56] duration metric: took 14.357875ms for fixHost
	I0318 13:41:58.887711    9104 start.go:83] releasing machines lock for "multinode-685000", held for 14.372125ms
	W0318 13:41:58.887719    9104 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:41:58.887745    9104 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:41:58.887749    9104 start.go:728] Will try again in 5 seconds ...
	I0318 13:42:03.889895    9104 start.go:360] acquireMachinesLock for multinode-685000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:42:03.890337    9104 start.go:364] duration metric: took 351µs to acquireMachinesLock for "multinode-685000"
	I0318 13:42:03.890583    9104 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:42:03.890603    9104 fix.go:54] fixHost starting: 
	I0318 13:42:03.891324    9104 fix.go:112] recreateIfNeeded on multinode-685000: state=Stopped err=<nil>
	W0318 13:42:03.891352    9104 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:42:03.894773    9104 out.go:177] * Restarting existing qemu2 VM for "multinode-685000" ...
	I0318 13:42:03.904004    9104 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:6b:88:37:29:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/multinode-685000/disk.qcow2
	I0318 13:42:03.913820    9104 main.go:141] libmachine: STDOUT: 
	I0318 13:42:03.913893    9104 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:42:03.913984    9104 fix.go:56] duration metric: took 23.38275ms for fixHost
	I0318 13:42:03.914003    9104 start.go:83] releasing machines lock for "multinode-685000", held for 23.642333ms
	W0318 13:42:03.914186    9104 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-685000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-685000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:42:03.922822    9104 out.go:177] 
	W0318 13:42:03.925799    9104 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:42:03.925824    9104 out.go:239] * 
	* 
	W0318 13:42:03.928296    9104 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:42:03.935781    9104 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-685000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000: exit status 7 (72.540125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-685000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
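
The start path visible above gives up after exactly one retry: StartHost fails, start.go waits five seconds ("Will try again in 5 seconds ..."), re-acquires the machines lock, repeats the identical qemu launch, and then exits with GUEST_PROVISION, which surfaces to the test as exit status 80. The observed control flow is roughly this (start_host is a hypothetical stand-in for the fixHost step in the log):

	for attempt in 1 2; do
	  start_host && exit 0              # VM came up; start continues normally
	  [ "$attempt" -eq 1 ] && sleep 5   # "Will try again in 5 seconds ..."
	done
	exit 80                             # X Exiting due to GUEST_PROVISION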

TestMultiNode/serial/ValidateNameConflict (20.08s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-685000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-685000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-685000-m01 --driver=qemu2 : exit status 80 (9.981731125s)

-- stdout --
	* [multinode-685000-m01] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-685000-m01" primary control-plane node in "multinode-685000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-685000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-685000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-685000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-685000-m02 --driver=qemu2 : exit status 80 (9.841899875s)

-- stdout --
	* [multinode-685000-m02] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-685000-m02" primary control-plane node in "multinode-685000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-685000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-685000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-685000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-685000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-685000: exit status 83 (81.95025ms)

-- stdout --
	* The control-plane node multinode-685000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-685000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-685000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-685000 -n multinode-685000: exit status 7 (31.778583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-685000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.08s)
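
Unlike the restart-based subtests, these two starts fail on the create path ("Creating qemu2 VM ..."), which confirms the problem is the host's socket_vmnet daemon rather than stale profile state; the suggested "minikube delete -p ..." cannot fix a refused socket. The test deletes -m02 itself; checking what else the run left behind is a reasonable follow-up (profile list is a standard minikube subcommand, and deleting -m01 here is an assumption about this run's leftovers):

	out/minikube-darwin-arm64 profile list
	out/minikube-darwin-arm64 delete -p multinode-685000-m01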

TestPreload (10.1s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-211000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-211000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.92864475s)

-- stdout --
	* [test-preload-211000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-211000" primary control-plane node in "test-preload-211000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-211000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:42:24.264194    9172 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:42:24.264314    9172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:42:24.264317    9172 out.go:304] Setting ErrFile to fd 2...
	I0318 13:42:24.264320    9172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:42:24.264450    9172 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:42:24.265524    9172 out.go:298] Setting JSON to false
	I0318 13:42:24.281221    9172 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6116,"bootTime":1710788428,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:42:24.281292    9172 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:42:24.287081    9172 out.go:177] * [test-preload-211000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:42:24.294206    9172 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:42:24.299109    9172 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:42:24.294251    9172 notify.go:220] Checking for updates...
	I0318 13:42:24.305173    9172 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:42:24.309144    9172 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:42:24.312141    9172 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:42:24.315149    9172 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:42:24.318437    9172 config.go:182] Loaded profile config "multinode-685000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:42:24.318490    9172 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:42:24.323114    9172 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 13:42:24.330066    9172 start.go:297] selected driver: qemu2
	I0318 13:42:24.330072    9172 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:42:24.330077    9172 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:42:24.332325    9172 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:42:24.335129    9172 out.go:177] * Automatically selected the socket_vmnet network
	I0318 13:42:24.338262    9172 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:42:24.338306    9172 cni.go:84] Creating CNI manager for ""
	I0318 13:42:24.338315    9172 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:42:24.338319    9172 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 13:42:24.338355    9172 start.go:340] cluster config:
	{Name:test-preload-211000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:42:24.342811    9172 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:42:24.350113    9172 out.go:177] * Starting "test-preload-211000" primary control-plane node in "test-preload-211000" cluster
	I0318 13:42:24.353035    9172 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0318 13:42:24.353118    9172 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/test-preload-211000/config.json ...
	I0318 13:42:24.353138    9172 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/test-preload-211000/config.json: {Name:mkf0e05ec48689ccfd6f1d542a4ee120b6a25e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:42:24.353176    9172 cache.go:107] acquiring lock: {Name:mk189d694ac9f9bf1008521ce7d7ba734fe35b8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:42:24.353192    9172 cache.go:107] acquiring lock: {Name:mk44ad7f66f5712d59df5e5cda9011b907a68782 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:42:24.353184    9172 cache.go:107] acquiring lock: {Name:mk45255dc3e9cf2b2b284af749541a09a8c91ca7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:42:24.353227    9172 cache.go:107] acquiring lock: {Name:mk8c3a3d1f3446cf4983b3fe5266dc7b9cb58e66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:42:24.353427    9172 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:42:24.353506    9172 start.go:360] acquireMachinesLock for test-preload-211000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:42:24.353506    9172 cache.go:107] acquiring lock: {Name:mk91fcb7dc42c63b47a869a1d65521f9ddd8962a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:42:24.353437    9172 cache.go:107] acquiring lock: {Name:mk63ba5d1edcb5eaf24d231fa0f011d3e42823ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:42:24.353542    9172 start.go:364] duration metric: took 27.5µs to acquireMachinesLock for "test-preload-211000"
	I0318 13:42:24.353556    9172 start.go:93] Provisioning new machine with config: &{Name:test-preload-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:42:24.353589    9172 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:42:24.353578    9172 cache.go:107] acquiring lock: {Name:mkcf48575a10cdcc92f3c506eb6221868120c65f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:42:24.353604    9172 cache.go:107] acquiring lock: {Name:mkec11fa5d0ee7f35aa36443470976d6dc0e17be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:42:24.353638    9172 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0318 13:42:24.353430    9172 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0318 13:42:24.353626    9172 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0318 13:42:24.361089    9172 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:42:24.353650    9172 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0318 13:42:24.353692    9172 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0318 13:42:24.353652    9172 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0318 13:42:24.357466    9172 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 13:42:24.364187    9172 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0318 13:42:24.364235    9172 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0318 13:42:24.364722    9172 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0318 13:42:24.364778    9172 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:42:24.367767    9172 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0318 13:42:24.367817    9172 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0318 13:42:24.367879    9172 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0318 13:42:24.367909    9172 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 13:42:24.379093    9172 start.go:159] libmachine.API.Create for "test-preload-211000" (driver="qemu2")
	I0318 13:42:24.379109    9172 client.go:168] LocalClient.Create starting
	I0318 13:42:24.379226    9172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:42:24.379255    9172 main.go:141] libmachine: Decoding PEM data...
	I0318 13:42:24.379265    9172 main.go:141] libmachine: Parsing certificate...
	I0318 13:42:24.379318    9172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:42:24.379341    9172 main.go:141] libmachine: Decoding PEM data...
	I0318 13:42:24.379348    9172 main.go:141] libmachine: Parsing certificate...
	I0318 13:42:24.379741    9172 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:42:24.528672    9172 main.go:141] libmachine: Creating SSH key...
	I0318 13:42:24.614080    9172 main.go:141] libmachine: Creating Disk image...
	I0318 13:42:24.614095    9172 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:42:24.614303    9172 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/test-preload-211000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/test-preload-211000/disk.qcow2
	I0318 13:42:24.627770    9172 main.go:141] libmachine: STDOUT: 
	I0318 13:42:24.627791    9172 main.go:141] libmachine: STDERR: 
	I0318 13:42:24.627837    9172 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/test-preload-211000/disk.qcow2 +20000M
	I0318 13:42:24.639713    9172 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:42:24.639733    9172 main.go:141] libmachine: STDERR: 
	I0318 13:42:24.639754    9172 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/test-preload-211000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/test-preload-211000/disk.qcow2
	I0318 13:42:24.639758    9172 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:42:24.639785    9172 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/test-preload-211000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/test-preload-211000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/test-preload-211000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:e2:92:92:3e:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/test-preload-211000/disk.qcow2
	I0318 13:42:24.641895    9172 main.go:141] libmachine: STDOUT: 
	I0318 13:42:24.641923    9172 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:42:24.641945    9172 client.go:171] duration metric: took 262.833166ms to LocalClient.Create
	W0318 13:42:26.299247    9172 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0318 13:42:26.299356    9172 cache.go:162] opening:  /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0318 13:42:26.345146    9172 cache.go:162] opening:  /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0318 13:42:26.386485    9172 cache.go:162] opening:  /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0318 13:42:26.409725    9172 cache.go:162] opening:  /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0318 13:42:26.418908    9172 cache.go:162] opening:  /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0318 13:42:26.430512    9172 cache.go:162] opening:  /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0318 13:42:26.450521    9172 cache.go:162] opening:  /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0318 13:42:26.503388    9172 cache.go:157] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0318 13:42:26.503432    9172 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.150037958s
	I0318 13:42:26.503462    9172 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0318 13:42:26.642254    9172 start.go:128] duration metric: took 2.288657125s to createHost
	I0318 13:42:26.642314    9172 start.go:83] releasing machines lock for "test-preload-211000", held for 2.288774875s
	W0318 13:42:26.642360    9172 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:42:26.653380    9172 out.go:177] * Deleting "test-preload-211000" in qemu2 ...
	W0318 13:42:26.684563    9172 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:42:26.684603    9172 start.go:728] Will try again in 5 seconds ...
	W0318 13:42:27.016381    9172 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0318 13:42:27.016520    9172 cache.go:162] opening:  /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 13:42:27.364335    9172 cache.go:157] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0318 13:42:27.364394    9172 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.010904417s
	I0318 13:42:27.364430    9172 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0318 13:42:28.911110    9172 cache.go:157] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0318 13:42:28.911157    9172 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.557737s
	I0318 13:42:28.911181    9172 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0318 13:42:28.918438    9172 cache.go:157] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0318 13:42:28.918479    9172 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 4.565325584s
	I0318 13:42:28.918503    9172 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0318 13:42:30.180793    9172 cache.go:157] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0318 13:42:30.180839    9172 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.827639625s
	I0318 13:42:30.180884    9172 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0318 13:42:30.876034    9172 cache.go:157] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0318 13:42:30.876083    9172 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.522928542s
	I0318 13:42:30.876118    9172 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0318 13:42:30.879418    9172 cache.go:157] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0318 13:42:30.879450    9172 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.526314333s
	I0318 13:42:30.879472    9172 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0318 13:42:31.684978    9172 start.go:360] acquireMachinesLock for test-preload-211000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:42:31.685337    9172 start.go:364] duration metric: took 279.75µs to acquireMachinesLock for "test-preload-211000"
	I0318 13:42:31.685452    9172 start.go:93] Provisioning new machine with config: &{Name:test-preload-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.24.4 ClusterName:test-preload-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:42:31.685701    9172 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:42:31.692464    9172 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:42:31.742131    9172 start.go:159] libmachine.API.Create for "test-preload-211000" (driver="qemu2")
	I0318 13:42:31.742198    9172 client.go:168] LocalClient.Create starting
	I0318 13:42:31.742375    9172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:42:31.742457    9172 main.go:141] libmachine: Decoding PEM data...
	I0318 13:42:31.742477    9172 main.go:141] libmachine: Parsing certificate...
	I0318 13:42:31.742550    9172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:42:31.742605    9172 main.go:141] libmachine: Decoding PEM data...
	I0318 13:42:31.742619    9172 main.go:141] libmachine: Parsing certificate...
	I0318 13:42:31.743156    9172 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:42:31.907051    9172 main.go:141] libmachine: Creating SSH key...
	I0318 13:42:32.092723    9172 main.go:141] libmachine: Creating Disk image...
	I0318 13:42:32.092731    9172 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:42:32.092909    9172 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/test-preload-211000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/test-preload-211000/disk.qcow2
	I0318 13:42:32.105840    9172 main.go:141] libmachine: STDOUT: 
	I0318 13:42:32.105863    9172 main.go:141] libmachine: STDERR: 
	I0318 13:42:32.105919    9172 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/test-preload-211000/disk.qcow2 +20000M
	I0318 13:42:32.117073    9172 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:42:32.117106    9172 main.go:141] libmachine: STDERR: 
	I0318 13:42:32.117135    9172 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/test-preload-211000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/test-preload-211000/disk.qcow2
	I0318 13:42:32.117140    9172 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:42:32.117186    9172 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/test-preload-211000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/test-preload-211000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/test-preload-211000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:08:d9:d5:66:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/test-preload-211000/disk.qcow2
	I0318 13:42:32.119082    9172 main.go:141] libmachine: STDOUT: 
	I0318 13:42:32.119097    9172 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:42:32.119108    9172 client.go:171] duration metric: took 376.892083ms to LocalClient.Create
	I0318 13:42:33.351457    9172 cache.go:157] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0318 13:42:33.351503    9172 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.998174167s
	I0318 13:42:33.351527    9172 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0318 13:42:33.351577    9172 cache.go:87] Successfully saved all images to host disk.
	I0318 13:42:34.119686    9172 start.go:128] duration metric: took 2.433970167s to createHost
	I0318 13:42:34.119761    9172 start.go:83] releasing machines lock for "test-preload-211000", held for 2.434411083s
	W0318 13:42:34.120072    9172 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-211000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:42:34.131561    9172 out.go:177] 
	W0318 13:42:34.135649    9172 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:42:34.135675    9172 out.go:239] * 
	W0318 13:42:34.138427    9172 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:42:34.146600    9172 out.go:177] 

** /stderr **
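
Although the VM never came up, the stderr above also shows the image-cache pipeline running to completion: every image whose registry copy was amd64 was re-pulled for arm64 and saved as a per-image tarball. A minimal sketch for confirming that on the host, assuming a plain directory listing is enough (the cache path is copied from the log):

	ls -R /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64
	# expected: one tarball per cached image, e.g. registry.k8s.io/pause_3.7 and gcr.io/k8s-minikube/storage-provisioner_v5
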
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-211000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-03-18 13:42:34.166065 -0700 PDT m=+835.783404043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-211000 -n test-preload-211000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-211000 -n test-preload-211000: exit status 7 (67.651ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-211000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-211000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-211000
--- FAIL: TestPreload (10.10s)
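
Every qemu2 VM creation in this failure (and in the neighboring tests below) dies at the same host-side step: socket_vmnet_client cannot reach /var/run/socket_vmnet, so QEMU itself is never launched. A minimal host-side check, assuming socket_vmnet is installed as the Homebrew-managed launchd service used for minikube's qemu2 networking (both paths are copied from the failing command line above):

	ls -l /var/run/socket_vmnet              # the daemon's socket; missing or unconnectable explains "Connection refused"
	sudo brew services restart socket_vmnet  # restart the service (assumes a Homebrew install of socket_vmnet)
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true  # smoke test: exits 0 once the socket accepts connections
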

TestScheduledStopUnix (9.98s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-850000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-850000 --memory=2048 --driver=qemu2 : exit status 80 (9.80609825s)

-- stdout --
	* [scheduled-stop-850000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-850000" primary control-plane node in "scheduled-stop-850000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-850000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-850000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

panic.go:626: *** TestScheduledStopUnix FAILED at 2024-03-18 13:42:44.141646 -0700 PDT m=+845.759036293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-850000 -n scheduled-stop-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-850000 -n scheduled-stop-850000: exit status 7 (71.084667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-850000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-850000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-850000
--- FAIL: TestScheduledStopUnix (9.98s)

TestSkaffold (16.68s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2106628843 version
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-932000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-932000 --memory=2600 --driver=qemu2 : exit status 80 (9.97866225s)

-- stdout --
	* [skaffold-932000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-932000" primary control-plane node in "skaffold-932000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-932000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-932000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

panic.go:626: *** TestSkaffold FAILED at 2024-03-18 13:43:00.831652 -0700 PDT m=+862.449128210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-932000 -n skaffold-932000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-932000 -n skaffold-932000: exit status 7 (62.804084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-932000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-932000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-932000
--- FAIL: TestSkaffold (16.68s)

TestRunningBinaryUpgrade (639.8s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1396297201 start -p running-upgrade-647000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1396297201 start -p running-upgrade-647000 --memory=2200 --vm-driver=qemu2 : (1m20.278068875s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-647000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-647000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m40.140027208s)
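
Unlike the fresh qemu2 starts above, the v1.26.0 start here succeeded (1m20s): the profile dump below shows Network, SocketVMnetClientPath and SocketVMnetPath all empty, so this cluster runs on the driver's default user-mode networking and never touches socket_vmnet, and the new binary then reuses that already-running VM. A quick way to check which networking a profile was created with, assuming jq is available (the config.json path appears at the profile.go line in the log below):

	jq '{Network, SocketVMnetClientPath, SocketVMnetPath}' /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/config.json
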

-- stdout --
	* [running-upgrade-647000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-647000" primary control-plane node in "running-upgrade-647000" cluster
	* Updating the running qemu2 "running-upgrade-647000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0318 13:45:06.590689    9587 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:45:06.590812    9587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:45:06.590815    9587 out.go:304] Setting ErrFile to fd 2...
	I0318 13:45:06.590818    9587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:45:06.590932    9587 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:45:06.591840    9587 out.go:298] Setting JSON to false
	I0318 13:45:06.609210    9587 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6278,"bootTime":1710788428,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:45:06.609276    9587 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:45:06.614668    9587 out.go:177] * [running-upgrade-647000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:45:06.622585    9587 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:45:06.622652    9587 notify.go:220] Checking for updates...
	I0318 13:45:06.629495    9587 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:45:06.633615    9587 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:45:06.636623    9587 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:45:06.639621    9587 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:45:06.642644    9587 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:45:06.645905    9587 config.go:182] Loaded profile config "running-upgrade-647000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 13:45:06.649576    9587 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 13:45:06.652604    9587 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:45:06.655652    9587 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 13:45:06.662610    9587 start.go:297] selected driver: qemu2
	I0318 13:45:06.662615    9587 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-647000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51166 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgra
de-647000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 13:45:06.662661    9587 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:45:06.665089    9587 cni.go:84] Creating CNI manager for ""
	I0318 13:45:06.665105    9587 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:45:06.665126    9587 start.go:340] cluster config:
	{Name:running-upgrade-647000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51166 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-647000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 13:45:06.665173    9587 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:45:06.673583    9587 out.go:177] * Starting "running-upgrade-647000" primary control-plane node in "running-upgrade-647000" cluster
	I0318 13:45:06.677607    9587 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 13:45:06.677619    9587 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0318 13:45:06.677624    9587 cache.go:56] Caching tarball of preloaded images
	I0318 13:45:06.677677    9587 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:45:06.677683    9587 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0318 13:45:06.677728    9587 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/config.json ...
	I0318 13:45:06.678057    9587 start.go:360] acquireMachinesLock for running-upgrade-647000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:45:06.678081    9587 start.go:364] duration metric: took 19.125µs to acquireMachinesLock for "running-upgrade-647000"
	I0318 13:45:06.678091    9587 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:45:06.678095    9587 fix.go:54] fixHost starting: 
	I0318 13:45:06.678779    9587 fix.go:112] recreateIfNeeded on running-upgrade-647000: state=Running err=<nil>
	W0318 13:45:06.678787    9587 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:45:06.682685    9587 out.go:177] * Updating the running qemu2 "running-upgrade-647000" VM ...
	I0318 13:45:06.690604    9587 machine.go:94] provisionDockerMachine start ...
	I0318 13:45:06.690634    9587 main.go:141] libmachine: Using SSH client type: native
	I0318 13:45:06.690743    9587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048ddbf0] 0x1048e0450 <nil>  [] 0s} localhost 51134 <nil> <nil>}
	I0318 13:45:06.690748    9587 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:45:06.757868    9587 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-647000
	
	I0318 13:45:06.757881    9587 buildroot.go:166] provisioning hostname "running-upgrade-647000"
	I0318 13:45:06.757934    9587 main.go:141] libmachine: Using SSH client type: native
	I0318 13:45:06.758033    9587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048ddbf0] 0x1048e0450 <nil>  [] 0s} localhost 51134 <nil> <nil>}
	I0318 13:45:06.758038    9587 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-647000 && echo "running-upgrade-647000" | sudo tee /etc/hostname
	I0318 13:45:06.829436    9587 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-647000
	
	I0318 13:45:06.829493    9587 main.go:141] libmachine: Using SSH client type: native
	I0318 13:45:06.829604    9587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048ddbf0] 0x1048e0450 <nil>  [] 0s} localhost 51134 <nil> <nil>}
	I0318 13:45:06.829613    9587 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-647000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-647000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-647000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:45:06.894989    9587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:45:06.895000    9587 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18421-6777/.minikube CaCertPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18421-6777/.minikube}
	I0318 13:45:06.895006    9587 buildroot.go:174] setting up certificates
	I0318 13:45:06.895011    9587 provision.go:84] configureAuth start
	I0318 13:45:06.895023    9587 provision.go:143] copyHostCerts
	I0318 13:45:06.895090    9587 exec_runner.go:144] found /Users/jenkins/minikube-integration/18421-6777/.minikube/cert.pem, removing ...
	I0318 13:45:06.895096    9587 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18421-6777/.minikube/cert.pem
	I0318 13:45:06.895217    9587 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18421-6777/.minikube/cert.pem (1123 bytes)
	I0318 13:45:06.895407    9587 exec_runner.go:144] found /Users/jenkins/minikube-integration/18421-6777/.minikube/key.pem, removing ...
	I0318 13:45:06.895410    9587 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18421-6777/.minikube/key.pem
	I0318 13:45:06.895449    9587 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18421-6777/.minikube/key.pem (1679 bytes)
	I0318 13:45:06.895559    9587 exec_runner.go:144] found /Users/jenkins/minikube-integration/18421-6777/.minikube/ca.pem, removing ...
	I0318 13:45:06.895562    9587 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18421-6777/.minikube/ca.pem
	I0318 13:45:06.895617    9587 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18421-6777/.minikube/ca.pem (1078 bytes)
	I0318 13:45:06.895723    9587 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-647000 san=[127.0.0.1 localhost minikube running-upgrade-647000]
	I0318 13:45:06.969939    9587 provision.go:177] copyRemoteCerts
	I0318 13:45:06.969977    9587 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:45:06.969987    9587 sshutil.go:53] new ssh client: &{IP:localhost Port:51134 SSHKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/running-upgrade-647000/id_rsa Username:docker}
	I0318 13:45:07.006046    9587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0318 13:45:07.012783    9587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:45:07.019813    9587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:45:07.026392    9587 provision.go:87] duration metric: took 131.370791ms to configureAuth
	I0318 13:45:07.026400    9587 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:45:07.026501    9587 config.go:182] Loaded profile config "running-upgrade-647000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 13:45:07.026534    9587 main.go:141] libmachine: Using SSH client type: native
	I0318 13:45:07.026620    9587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048ddbf0] 0x1048e0450 <nil>  [] 0s} localhost 51134 <nil> <nil>}
	I0318 13:45:07.026624    9587 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 13:45:07.092026    9587 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 13:45:07.092033    9587 buildroot.go:70] root file system type: tmpfs
	I0318 13:45:07.092101    9587 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 13:45:07.092151    9587 main.go:141] libmachine: Using SSH client type: native
	I0318 13:45:07.092251    9587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048ddbf0] 0x1048e0450 <nil>  [] 0s} localhost 51134 <nil> <nil>}
	I0318 13:45:07.092284    9587 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 13:45:07.162054    9587 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 13:45:07.162107    9587 main.go:141] libmachine: Using SSH client type: native
	I0318 13:45:07.162216    9587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048ddbf0] 0x1048e0450 <nil>  [] 0s} localhost 51134 <nil> <nil>}
	I0318 13:45:07.162224    9587 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 13:45:07.228878    9587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:45:07.228889    9587 machine.go:97] duration metric: took 538.283542ms to provisionDockerMachine
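
The unit update just above is deliberately idempotent: the new unit is written to docker.service.new, diffed against the live file, and only when the diff is non-empty is it moved into place and the daemon reloaded and restarted. A rough local sketch of that shell sequence via os/exec (run over SSH in the real flow):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	unit := "/lib/systemd/system/docker.service"
	// diff exits non-zero when the files differ; only then swap and restart.
	if exec.Command("sudo", "diff", "-u", unit, unit+".new").Run() == nil {
		fmt.Println("docker.service already up to date")
		return
	}
	for _, args := range [][]string{
		{"mv", unit + ".new", unit},
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v\n%s", args, err, out)
			return
		}
	}
}
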
	I0318 13:45:07.228894    9587 start.go:293] postStartSetup for "running-upgrade-647000" (driver="qemu2")
	I0318 13:45:07.228901    9587 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:45:07.228954    9587 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:45:07.228963    9587 sshutil.go:53] new ssh client: &{IP:localhost Port:51134 SSHKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/running-upgrade-647000/id_rsa Username:docker}
	I0318 13:45:07.263397    9587 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:45:07.264574    9587 info.go:137] Remote host: Buildroot 2021.02.12
	I0318 13:45:07.264580    9587 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18421-6777/.minikube/addons for local assets ...
	I0318 13:45:07.264642    9587 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18421-6777/.minikube/files for local assets ...
	I0318 13:45:07.264722    9587 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18421-6777/.minikube/files/etc/ssl/certs/72362.pem -> 72362.pem in /etc/ssl/certs
	I0318 13:45:07.264806    9587 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:45:07.267362    9587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/files/etc/ssl/certs/72362.pem --> /etc/ssl/certs/72362.pem (1708 bytes)
	I0318 13:45:07.273991    9587 start.go:296] duration metric: took 45.091458ms for postStartSetup
	I0318 13:45:07.274002    9587 fix.go:56] duration metric: took 595.910709ms for fixHost
	I0318 13:45:07.274031    9587 main.go:141] libmachine: Using SSH client type: native
	I0318 13:45:07.274123    9587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048ddbf0] 0x1048e0450 <nil>  [] 0s} localhost 51134 <nil> <nil>}
	I0318 13:45:07.274129    9587 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 13:45:07.338457    9587 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710794707.242183932
	
	I0318 13:45:07.338465    9587 fix.go:216] guest clock: 1710794707.242183932
	I0318 13:45:07.338470    9587 fix.go:229] Guest: 2024-03-18 13:45:07.242183932 -0700 PDT Remote: 2024-03-18 13:45:07.274003 -0700 PDT m=+0.705620293 (delta=-31.819068ms)
	I0318 13:45:07.338483    9587 fix.go:200] guest clock delta is within tolerance: -31.819068ms
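
fix.go reads the guest's `date +%s.%N`, parses it as fractional seconds, and accepts the clock only when the host/guest delta is within tolerance (here -31.8ms). A small sketch of that comparison; the one-second tolerance below is an assumption for illustration:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns the
// host-guest offset (float64 parsing loses a little nanosecond precision).
func clockDelta(guestOut string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}

func main() {
	delta, err := clockDelta("1710794707.242183932") // value from the log
	if err != nil {
		panic(err)
	}
	tol := time.Second // assumed tolerance, for illustration
	fmt.Printf("delta=%v within tolerance=%v\n", delta, math.Abs(float64(delta)) <= float64(tol))
}
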
	I0318 13:45:07.338487    9587 start.go:83] releasing machines lock for "running-upgrade-647000", held for 660.404583ms
	I0318 13:45:07.338542    9587 ssh_runner.go:195] Run: cat /version.json
	I0318 13:45:07.338551    9587 sshutil.go:53] new ssh client: &{IP:localhost Port:51134 SSHKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/running-upgrade-647000/id_rsa Username:docker}
	I0318 13:45:07.338557    9587 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:45:07.338578    9587 sshutil.go:53] new ssh client: &{IP:localhost Port:51134 SSHKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/running-upgrade-647000/id_rsa Username:docker}
	W0318 13:45:07.339142    9587 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51134: connect: connection refused
	I0318 13:45:07.339166    9587 retry.go:31] will retry after 310.820984ms: dial tcp [::1]:51134: connect: connection refused
	W0318 13:45:07.691185    9587 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0318 13:45:07.691282    9587 ssh_runner.go:195] Run: systemctl --version
	I0318 13:45:07.693761    9587 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:45:07.696031    9587 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:45:07.696067    9587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0318 13:45:07.699636    9587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0318 13:45:07.705083    9587 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
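
The find/sed pipelines above rewrite any bridge or podman CNI config so its IPv4 subnet becomes the pod CIDR 10.244.0.0/16. The same edit reads more clearly as structured JSON manipulation; a sketch on a trimmed config (real conflists nest the subnet under ipam.ranges):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A trimmed bridge config; real conflists nest subnet under ipam.ranges.
	in := []byte(`{"type":"bridge","ipam":{"subnet":"10.88.0.0/16"}}`)
	var conf map[string]any
	if err := json.Unmarshal(in, &conf); err != nil {
		panic(err)
	}
	if ipam, ok := conf["ipam"].(map[string]any); ok {
		ipam["subnet"] = "10.244.0.0/16" // the pod CIDR from the log
	}
	out, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(out))
}
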
	I0318 13:45:07.705090    9587 start.go:494] detecting cgroup driver to use...
	I0318 13:45:07.705228    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:45:07.711276    9587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0318 13:45:07.714422    9587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 13:45:07.717554    9587 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 13:45:07.717575    9587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 13:45:07.722029    9587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 13:45:07.725131    9587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 13:45:07.728501    9587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 13:45:07.731754    9587 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:45:07.734513    9587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 13:45:07.737607    9587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0318 13:45:07.741076    9587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0318 13:45:07.744160    9587 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:45:07.746578    9587 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:45:07.749470    9587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:45:07.840868    9587 ssh_runner.go:195] Run: sudo systemctl restart containerd
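
Every sed call in this block pushes /etc/containerd/config.toml toward the same state: cgroupfs as the cgroup driver, runc v2 as the runtime, and /etc/cni/net.d as the CNI directory, after which containerd is restarted. One of those substitutions, reproduced with Go's regexp package:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
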
	I0318 13:45:07.852413    9587 start.go:494] detecting cgroup driver to use...
	I0318 13:45:07.852489    9587 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 13:45:07.858607    9587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:45:07.863298    9587 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:45:07.872906    9587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:45:07.877947    9587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 13:45:07.882565    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:45:07.887499    9587 ssh_runner.go:195] Run: which cri-dockerd
	I0318 13:45:07.888788    9587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 13:45:07.891798    9587 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 13:45:07.896671    9587 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 13:45:07.988465    9587 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 13:45:08.076248    9587 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 13:45:08.076308    9587 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 13:45:08.081759    9587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:45:08.172762    9587 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 13:45:25.887953    9587 ssh_runner.go:235] Completed: sudo systemctl restart docker: (17.715263125s)
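
docker.go:574 above writes a 130-byte /etc/docker/daemon.json so dockerd also uses cgroupfs. The log does not show the file's contents; the reconstruction below is a plausible shape based on what minikube typically configures, not a copy from the VM:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical reconstruction of the ~130-byte daemon.json pushed above.
	cfg := map[string]any{
		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
		"log-driver": "json-file",
		"log-opts":   map[string]string{"max-size": "100m"},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}
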
	I0318 13:45:25.888024    9587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 13:45:25.895340    9587 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0318 13:45:25.903270    9587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 13:45:25.908941    9587 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 13:45:26.000344    9587 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 13:45:26.062503    9587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:45:26.138243    9587 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 13:45:26.143902    9587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 13:45:26.148663    9587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:45:26.209721    9587 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 13:45:26.249659    9587 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 13:45:26.249724    9587 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 13:45:26.251829    9587 start.go:562] Will wait 60s for crictl version
	I0318 13:45:26.251884    9587 ssh_runner.go:195] Run: which crictl
	I0318 13:45:26.253389    9587 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:45:26.265401    9587 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
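
Both waits above (60s for the cri-dockerd socket, 60s for crictl) are bounded polls: stat the path, retry, give up at the deadline. A compact version of that loop; the 500ms retry interval is an assumption:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or the deadline passes.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // assumed retry interval
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForPath("/var/run/cri-dockerd.sock", 60*time.Second))
}
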
	I0318 13:45:26.265464    9587 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 13:45:26.277686    9587 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 13:45:26.294553    9587 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0318 13:45:26.294679    9587 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0318 13:45:26.296164    9587 kubeadm.go:877] updating cluster {Name:running-upgrade-647000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51166 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-647000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0318 13:45:26.296210    9587 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 13:45:26.296244    9587 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 13:45:26.306525    9587 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 13:45:26.306533    9587 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
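
The mismatch here is the registry rename: the upgraded VM's images are still tagged k8s.gcr.io/*, while the preload check looks for the current registry.k8s.io/* names, so every image is treated as not preloaded. The membership test amounts to:

package main

import "fmt"

func main() {
	// From `docker images --format {{.Repository}}:{{.Tag}}` above (abridged).
	got := map[string]bool{
		"k8s.gcr.io/kube-apiserver:v1.24.1": true,
		"k8s.gcr.io/pause:3.7":              true,
	}
	// The preload check looks for the post-rename registry name, so it misses.
	want := "registry.k8s.io/kube-apiserver:v1.24.1"
	fmt.Printf("%s preloaded: %v\n", want, got[want])
}
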
	I0318 13:45:26.306577    9587 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 13:45:26.310074    9587 ssh_runner.go:195] Run: which lz4
	I0318 13:45:26.311352    9587 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 13:45:26.312547    9587 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:45:26.312557    9587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0318 13:45:27.056967    9587 docker.go:649] duration metric: took 745.649333ms to copy over tarball
	I0318 13:45:27.057027    9587 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:45:28.394182    9587 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.3371485s)
	I0318 13:45:28.394208    9587 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 13:45:28.410137    9587 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 13:45:28.413310    9587 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0318 13:45:28.418351    9587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:45:28.494381    9587 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 13:45:29.764900    9587 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.270509625s)
	I0318 13:45:29.764992    9587 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 13:45:29.777623    9587 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 13:45:29.777632    9587 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 13:45:29.777637    9587 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 13:45:29.790269    9587 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 13:45:29.790271    9587 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 13:45:29.790400    9587 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0318 13:45:29.793700    9587 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 13:45:29.794052    9587 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:45:29.794107    9587 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 13:45:29.794200    9587 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 13:45:29.794242    9587 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0318 13:45:29.804425    9587 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0318 13:45:29.804575    9587 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0318 13:45:29.804687    9587 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 13:45:29.804784    9587 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 13:45:29.805309    9587 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:45:29.805345    9587 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 13:45:29.805321    9587 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 13:45:29.805348    9587 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 13:45:31.725856    9587 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 13:45:31.758125    9587 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0318 13:45:31.765320    9587 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0318 13:45:31.765408    9587 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 13:45:31.765506    9587 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 13:45:31.788070    9587 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0318 13:45:31.788104    9587 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0318 13:45:31.788169    9587 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0318 13:45:31.790648    9587 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	W0318 13:45:31.799920    9587 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0318 13:45:31.800061    9587 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0318 13:45:31.807136    9587 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0318 13:45:31.807263    9587 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0318 13:45:31.816092    9587 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0318 13:45:31.816118    9587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0318 13:45:31.816216    9587 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0318 13:45:31.816238    9587 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 13:45:31.816280    9587 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0318 13:45:31.825107    9587 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0318 13:45:31.825120    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0318 13:45:31.829982    9587 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0318 13:45:31.830111    9587 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0318 13:45:31.836413    9587 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0318 13:45:31.845054    9587 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0318 13:45:31.845949    9587 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0318 13:45:31.857598    9587 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0318 13:45:31.867250    9587 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0318 13:45:31.867289    9587 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0318 13:45:31.867316    9587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0318 13:45:31.867379    9587 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0318 13:45:31.867398    9587 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 13:45:31.867436    9587 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0318 13:45:31.876377    9587 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0318 13:45:31.876402    9587 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0318 13:45:31.876377    9587 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0318 13:45:31.876443    9587 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 13:45:31.876455    9587 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0318 13:45:31.876474    9587 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0318 13:45:31.883381    9587 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0318 13:45:31.883402    9587 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 13:45:31.883455    9587 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0318 13:45:31.889795    9587 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0318 13:45:31.914704    9587 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0318 13:45:31.914736    9587 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0318 13:45:31.914878    9587 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0318 13:45:31.924866    9587 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0318 13:45:31.927312    9587 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0318 13:45:31.927327    9587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0318 13:45:31.934658    9587 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0318 13:45:31.934674    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0318 13:45:32.002313    9587 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0318 13:45:32.127706    9587 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0318 13:45:32.127721    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0318 13:45:32.254178    9587 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	W0318 13:45:32.377786    9587 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0318 13:45:32.377967    9587 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:45:32.401245    9587 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0318 13:45:32.401274    9587 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:45:32.401340    9587 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:45:33.151468    9587 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 13:45:33.151969    9587 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:45:33.157831    9587 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0318 13:45:33.157936    9587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0318 13:45:33.211746    9587 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:45:33.211763    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0318 13:45:33.448589    9587 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 13:45:33.448628    9587 cache_images.go:92] duration metric: took 3.670998083s to LoadCachedImages
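
Each image above follows the same needs-transfer pattern: inspect its ID in the runtime, compare against the expected digest, and on mismatch remove the stale tag and stream the cached tarball through `docker load`. Sketched for one image, with the name and digest taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	img := "registry.k8s.io/pause:3.7"
	want := "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550"
	out, _ := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", img).Output()
	if strings.Contains(string(out), want) {
		fmt.Println(img, "already present at the expected hash")
		return
	}
	// Mismatch or missing: drop the stale tag, then load the cached tarball.
	exec.Command("docker", "rmi", img).Run()
	tar, err := os.Open("/var/lib/minikube/images/pause_3.7")
	if err != nil {
		fmt.Println("no cached tarball:", err)
		return
	}
	defer tar.Close()
	load := exec.Command("docker", "load")
	load.Stdin = tar
	fmt.Println(load.Run())
}
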
	W0318 13:45:33.448672    9587 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0318 13:45:33.448681    9587 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0318 13:45:33.448747    9587 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-647000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-647000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:45:33.448804    9587 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 13:45:33.462503    9587 cni.go:84] Creating CNI manager for ""
	I0318 13:45:33.462515    9587 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:45:33.462521    9587 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:45:33.462530    9587 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-647000 NodeName:running-upgrade-647000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:45:33.462592    9587 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-647000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:45:33.462644    9587 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0318 13:45:33.466266    9587 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:45:33.466299    9587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:45:33.469597    9587 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0318 13:45:33.474770    9587 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:45:33.479714    9587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0318 13:45:33.487035    9587 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0318 13:45:33.489095    9587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:45:33.574882    9587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:45:33.579899    9587 certs.go:68] Setting up /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000 for IP: 10.0.2.15
	I0318 13:45:33.579905    9587 certs.go:194] generating shared ca certs ...
	I0318 13:45:33.579914    9587 certs.go:226] acquiring lock for ca certs: {Name:mkb77ca79ad1917526a647bf0189e0c89f5a836a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:45:33.580121    9587 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/ca.key
	I0318 13:45:33.580153    9587 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/proxy-client-ca.key
	I0318 13:45:33.580162    9587 certs.go:256] generating profile certs ...
	I0318 13:45:33.580215    9587 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/client.key
	I0318 13:45:33.580224    9587 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/apiserver.key.9d1fd0b5
	I0318 13:45:33.580235    9587 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/apiserver.crt.9d1fd0b5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0318 13:45:33.771389    9587 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/apiserver.crt.9d1fd0b5 ...
	I0318 13:45:33.771397    9587 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/apiserver.crt.9d1fd0b5: {Name:mk261cb4e7585c5299de83e359855a31b46eb640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:45:33.771667    9587 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/apiserver.key.9d1fd0b5 ...
	I0318 13:45:33.771672    9587 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/apiserver.key.9d1fd0b5: {Name:mke393efcb89eb5cd4595b8a323c4877a0d57e2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:45:33.771787    9587 certs.go:381] copying /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/apiserver.crt.9d1fd0b5 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/apiserver.crt
	I0318 13:45:33.772031    9587 certs.go:385] copying /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/apiserver.key.9d1fd0b5 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/apiserver.key
	I0318 13:45:33.772196    9587 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/proxy-client.key
	I0318 13:45:33.772321    9587 certs.go:484] found cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/7236.pem (1338 bytes)
	W0318 13:45:33.772352    9587 certs.go:480] ignoring /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/7236_empty.pem, impossibly tiny 0 bytes
	I0318 13:45:33.772358    9587 certs.go:484] found cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:45:33.772377    9587 certs.go:484] found cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:45:33.772393    9587 certs.go:484] found cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:45:33.772409    9587 certs.go:484] found cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/key.pem (1679 bytes)
	I0318 13:45:33.772447    9587 certs.go:484] found cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/files/etc/ssl/certs/72362.pem (1708 bytes)
	I0318 13:45:33.772829    9587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:45:33.780376    9587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:45:33.787905    9587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:45:33.795120    9587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:45:33.801942    9587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 13:45:33.809228    9587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 13:45:33.816372    9587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:45:33.823562    9587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 13:45:33.830282    9587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/files/etc/ssl/certs/72362.pem --> /usr/share/ca-certificates/72362.pem (1708 bytes)
	I0318 13:45:33.837533    9587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:45:33.844629    9587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/7236.pem --> /usr/share/ca-certificates/7236.pem (1338 bytes)
	I0318 13:45:33.851124    9587 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:45:33.855876    9587 ssh_runner.go:195] Run: openssl version
	I0318 13:45:33.857627    9587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72362.pem && ln -fs /usr/share/ca-certificates/72362.pem /etc/ssl/certs/72362.pem"
	I0318 13:45:33.861062    9587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72362.pem
	I0318 13:45:33.862410    9587 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:31 /usr/share/ca-certificates/72362.pem
	I0318 13:45:33.862433    9587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72362.pem
	I0318 13:45:33.864141    9587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:45:33.866735    9587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:45:33.869889    9587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:45:33.871335    9587 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:44 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:45:33.871354    9587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:45:33.873100    9587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:45:33.876226    9587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7236.pem && ln -fs /usr/share/ca-certificates/7236.pem /etc/ssl/certs/7236.pem"
	I0318 13:45:33.879028    9587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7236.pem
	I0318 13:45:33.880343    9587 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:31 /usr/share/ca-certificates/7236.pem
	I0318 13:45:33.880358    9587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7236.pem
	I0318 13:45:33.882069    9587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7236.pem /etc/ssl/certs/51391683.0"
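
Each test/ln block above installs a PEM under /etc/ssl/certs plus the `<subject-hash>.0` symlink that OpenSSL's lookup expects; the hash is whatever `openssl x509 -hash -noout` prints (b5213941 for minikubeCA above). The equivalent steps via os/exec (needs root, like the sudo calls above):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // ln -fs semantics: replace any stale link
	fmt.Println(os.Symlink(pemPath, link))
}
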
	I0318 13:45:33.885311    9587 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:45:33.886693    9587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:45:33.888364    9587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:45:33.890023    9587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:45:33.891757    9587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:45:33.893596    9587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:45:33.895406    9587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
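
Each `-checkend 86400` run asks OpenSSL whether the certificate is still valid 24 hours from now, exiting 0 if so. The same check in pure Go:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("not a PEM certificate")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	// openssl x509 -checkend 86400: still valid one day from now?
	fmt.Println("valid in 24h:", time.Now().Add(24*time.Hour).Before(cert.NotAfter))
}
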
	I0318 13:45:33.897129    9587 kubeadm.go:391] StartCluster: {Name:running-upgrade-647000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51166 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-647000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 13:45:33.897189    9587 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 13:45:33.907415    9587 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:45:33.911090    9587 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:45:33.911096    9587 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:45:33.911098    9587 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:45:33.911123    9587 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:45:33.914028    9587 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:45:33.914070    9587 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-647000" does not appear in /Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:45:33.914084    9587 kubeconfig.go:62] /Users/jenkins/minikube-integration/18421-6777/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-647000" cluster setting kubeconfig missing "running-upgrade-647000" context setting]
	I0318 13:45:33.914264    9587 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/kubeconfig: {Name:mk6a62990bf9328d54440f15380010f8199a9228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:45:33.914883    9587 kapi.go:59] client config for running-upgrade-647000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/client.key", CAFile:"/Users/jenkins/minikube-integration/18421-6777/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105bcea80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 13:45:33.915649    9587 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:45:33.918369    9587 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-647000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
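
kubeadm.go:634 reconfigures because the diff above is non-empty: the CRI socket gained its unix:// scheme and the kubelet cgroup driver moved from systemd to cgroupfs. The detection itself is just a diff whose non-zero exit selects the restart path:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cur, next := "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new"
	out, err := exec.Command("sudo", "diff", "-u", cur, next).CombinedOutput()
	if err != nil { // diff exits 1 when the files differ
		fmt.Printf("config drift detected, reconfiguring:\n%s", out)
		return
	}
	fmt.Println("kubeadm config unchanged")
}
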
	I0318 13:45:33.918375    9587 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:45:33.918411    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 13:45:33.929616    9587 docker.go:483] Stopping containers: [af77e2853aa8 d0beb2df8c4d 00cfc4402308 465b77eb8d48 84cd5d05ad71 aa2b472eda1e df5b61fd860b da6572443e62 a5d0e75312a0 4f52d8f210c8 70a9184ea53a 2e2b7dc6d3a3 25aa3dfb2f66 7df5db528945]
	I0318 13:45:33.929675    9587 ssh_runner.go:195] Run: docker stop af77e2853aa8 d0beb2df8c4d 00cfc4402308 465b77eb8d48 84cd5d05ad71 aa2b472eda1e df5b61fd860b da6572443e62 a5d0e75312a0 4f52d8f210c8 70a9184ea53a 2e2b7dc6d3a3 25aa3dfb2f66 7df5db528945
	I0318 13:45:33.940925    9587 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:45:34.031875    9587 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:45:34.036305    9587 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5639 Mar 18 20:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Mar 18 20:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Mar 18 20:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Mar 18 20:44 /etc/kubernetes/scheduler.conf
	
	I0318 13:45:34.036334    9587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/admin.conf
	I0318 13:45:34.040316    9587 kubeadm.go:162] "https://control-plane.minikube.internal:51166" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:45:34.040341    9587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:45:34.043994    9587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/kubelet.conf
	I0318 13:45:34.047106    9587 kubeadm.go:162] "https://control-plane.minikube.internal:51166" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:45:34.047128    9587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:45:34.049966    9587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/controller-manager.conf
	I0318 13:45:34.052964    9587 kubeadm.go:162] "https://control-plane.minikube.internal:51166" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:45:34.052989    9587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:45:34.056049    9587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/scheduler.conf
	I0318 13:45:34.058736    9587 kubeadm.go:162] "https://control-plane.minikube.internal:51166" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:45:34.058766    9587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
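
Each of the four grep/rm pairs above applies the same rule: if a kubeconfig file no longer references the expected control-plane endpoint, delete it so the following kubeadm init phases regenerate it. A minimal local sketch of that logic (assuming direct file access rather than minikube's SSH runner; the endpoint string is the one in the log):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // removeIfStale deletes path unless its contents mention endpoint,
    // mirroring the grep-then-rm pattern in the log above.
    func removeIfStale(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if strings.Contains(string(data), endpoint) {
            return nil // endpoint present: keep the file
        }
        return os.Remove(path) // stale: force kubeadm to regenerate it
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:51166"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            if err := removeIfStale(f, endpoint); err != nil {
                fmt.Fprintln(os.Stderr, f, err)
            }
        }
    }
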
	I0318 13:45:34.061272    9587 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:45:34.064345    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:45:34.086368    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:45:34.430136    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:45:34.619968    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:45:34.646447    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:45:34.668139    9587 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:45:34.668210    9587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:45:35.170469    9587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:45:35.670289    9587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:45:36.170252    9587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:45:36.174095    9587 api_server.go:72] duration metric: took 1.505965833s to wait for apiserver process to appear ...
	I0318 13:45:36.174102    9587 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:45:36.174127    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:45:41.174443    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:45:41.174568    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:45:46.176168    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:45:46.176208    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:45:51.176535    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:45:51.176601    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:45:56.177003    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:45:56.177053    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:46:01.177790    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:46:01.177885    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:46:06.179074    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:46:06.179171    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:46:11.180893    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:46:11.180960    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:46:16.182828    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:46:16.182923    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:46:21.184634    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:46:21.184654    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:46:26.186894    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:46:26.186979    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:46:31.189647    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:46:31.189731    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:46:36.191549    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
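
The repeating pattern above is api_server.go polling the apiserver's /healthz endpoint; each "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" line is one attempt whose client timeout expired before any response headers arrived. A minimal sketch of such a poll loop, assuming a hypothetical waitHealthz helper and skipping certificate setup with InsecureSkipVerify for brevity (a real client should trust the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls url until it returns 200 OK or the deadline passes.
    func waitHealthz(url string, deadline time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // per-request cap, matching the ~5s gaps in the log
            Transport: &http.Transport{
                // Sketch only: production code should verify the server certificate.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := waitHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
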
	I0318 13:46:36.191661    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:46:36.216253    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:46:36.216344    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:46:36.229813    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:46:36.229887    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:46:36.241425    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:46:36.241495    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:46:36.267250    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:46:36.267312    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:46:36.277474    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:46:36.277534    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:46:36.288858    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:46:36.288921    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:46:36.299287    9587 logs.go:276] 0 containers: []
	W0318 13:46:36.299298    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:46:36.299356    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:46:36.311345    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:46:36.311362    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:46:36.311367    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:46:36.336372    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:46:36.336384    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:46:36.348783    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:46:36.348796    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:46:36.389064    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:46:36.389075    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:46:36.403546    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:46:36.403556    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:46:36.414806    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:46:36.414815    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:46:36.426110    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:46:36.426120    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:46:36.445893    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:46:36.445904    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:46:36.457776    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:46:36.457790    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:46:36.469750    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:46:36.469763    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:46:36.473995    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:46:36.474004    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:46:36.487520    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:46:36.487530    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:46:36.525524    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:46:36.525531    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:46:36.600614    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:46:36.600627    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:46:36.615706    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:46:36.615718    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:46:36.627040    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:46:36.627050    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
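
When the healthz probe keeps failing, logs.go falls back to the diagnostics pass above: list containers whose names match k8s_<component>, then tail the last 400 lines of each. A minimal sketch of the same docker-CLI pattern (direct exec for illustration; minikube issues these commands through its SSH runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns IDs of containers whose name matches k8s_<component>.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, err)
                continue
            }
            for _, id := range ids {
                // Tail the most recent 400 lines, as in the log above.
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s [%s] ==\n%s\n", c, id, logs)
            }
        }
    }
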
	I0318 13:46:39.148082    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:46:44.150919    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:46:44.151315    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:46:44.190346    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:46:44.190474    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:46:44.215272    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:46:44.215377    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:46:44.229805    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:46:44.229880    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:46:44.241951    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:46:44.242008    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:46:44.253848    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:46:44.253908    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:46:44.264311    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:46:44.264372    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:46:44.274567    9587 logs.go:276] 0 containers: []
	W0318 13:46:44.274578    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:46:44.274646    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:46:44.285464    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:46:44.285481    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:46:44.285486    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:46:44.297418    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:46:44.297431    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:46:44.309459    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:46:44.309470    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:46:44.321132    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:46:44.321143    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:46:44.357572    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:46:44.357578    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:46:44.372261    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:46:44.372274    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:46:44.386757    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:46:44.386767    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:46:44.397842    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:46:44.397852    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:46:44.435501    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:46:44.435514    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:46:44.457218    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:46:44.457229    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:46:44.475281    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:46:44.475294    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:46:44.501885    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:46:44.501896    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:46:44.505879    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:46:44.505885    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:46:44.519303    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:46:44.519317    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:46:44.556374    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:46:44.556384    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:46:44.567703    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:46:44.567716    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:46:47.081435    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:46:52.083693    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:46:52.084082    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:46:52.125389    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:46:52.125520    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:46:52.147535    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:46:52.147632    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:46:52.162613    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:46:52.162695    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:46:52.175264    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:46:52.175324    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:46:52.186857    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:46:52.186916    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:46:52.197024    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:46:52.197077    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:46:52.211393    9587 logs.go:276] 0 containers: []
	W0318 13:46:52.211409    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:46:52.211460    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:46:52.221969    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:46:52.221987    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:46:52.221994    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:46:52.226155    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:46:52.226160    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:46:52.245770    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:46:52.245781    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:46:52.256808    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:46:52.256817    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:46:52.269119    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:46:52.269129    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:46:52.280752    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:46:52.280768    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:46:52.317645    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:46:52.317656    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:46:52.331568    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:46:52.331580    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:46:52.345792    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:46:52.345801    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:46:52.363034    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:46:52.363043    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:46:52.378919    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:46:52.378929    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:46:52.392979    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:46:52.392990    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:46:52.405000    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:46:52.405013    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:46:52.416851    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:46:52.416861    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:46:52.454588    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:46:52.454600    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:46:52.479168    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:46:52.479174    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:46:55.017927    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:47:00.020163    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:47:00.020310    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:47:00.032816    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:47:00.032887    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:47:00.045882    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:47:00.045948    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:47:00.056398    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:47:00.056461    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:47:00.066825    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:47:00.066894    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:47:00.077026    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:47:00.077091    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:47:00.087533    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:47:00.087595    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:47:00.097707    9587 logs.go:276] 0 containers: []
	W0318 13:47:00.097720    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:47:00.097772    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:47:00.108370    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:47:00.108387    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:47:00.108398    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:47:00.143133    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:47:00.143150    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:47:00.157792    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:47:00.157802    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:47:00.169867    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:47:00.169879    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:47:00.183598    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:47:00.183613    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:47:00.224383    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:47:00.224394    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:47:00.238502    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:47:00.238510    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:47:00.263639    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:47:00.263648    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:47:00.267993    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:47:00.268000    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:47:00.279840    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:47:00.279854    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:47:00.297846    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:47:00.297857    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:47:00.310254    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:47:00.310269    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:47:00.348802    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:47:00.348810    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:47:00.362842    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:47:00.362854    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:47:00.373699    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:47:00.373711    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:47:00.384965    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:47:00.384975    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:47:02.899247    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:47:07.901940    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:47:07.902369    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:47:07.942963    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:47:07.943089    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:47:07.964216    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:47:07.964306    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:47:07.988124    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:47:07.988200    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:47:08.000478    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:47:08.000556    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:47:08.015447    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:47:08.015530    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:47:08.026193    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:47:08.026254    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:47:08.036587    9587 logs.go:276] 0 containers: []
	W0318 13:47:08.036599    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:47:08.036663    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:47:08.047428    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:47:08.047447    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:47:08.047451    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:47:08.082761    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:47:08.082773    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:47:08.097562    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:47:08.097575    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:47:08.109150    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:47:08.109164    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:47:08.123738    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:47:08.123750    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:47:08.136385    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:47:08.136397    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:47:08.148615    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:47:08.148628    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:47:08.161373    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:47:08.161384    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:47:08.165760    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:47:08.165769    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:47:08.184180    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:47:08.184190    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:47:08.199617    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:47:08.199627    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:47:08.214735    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:47:08.214763    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:47:08.243228    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:47:08.243239    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:47:08.256849    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:47:08.256863    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:47:08.296605    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:47:08.296616    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:47:08.308558    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:47:08.308568    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:47:10.849361    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:47:15.851679    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:47:15.851812    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:47:15.869859    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:47:15.869936    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:47:15.880519    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:47:15.880589    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:47:15.891414    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:47:15.891478    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:47:15.902322    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:47:15.902394    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:47:15.912699    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:47:15.912757    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:47:15.923188    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:47:15.923250    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:47:15.933365    9587 logs.go:276] 0 containers: []
	W0318 13:47:15.933376    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:47:15.933429    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:47:15.943775    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:47:15.943793    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:47:15.943799    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:47:15.979373    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:47:15.979384    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:47:15.994233    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:47:15.994246    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:47:16.009077    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:47:16.009087    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:47:16.020695    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:47:16.020706    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:47:16.025056    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:47:16.025063    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:47:16.037081    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:47:16.037091    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:47:16.055723    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:47:16.055733    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:47:16.067486    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:47:16.067496    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:47:16.094458    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:47:16.094465    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:47:16.132800    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:47:16.132811    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:47:16.149679    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:47:16.149688    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:47:16.189217    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:47:16.189230    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:47:16.203205    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:47:16.203216    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:47:16.215705    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:47:16.215717    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:47:16.227789    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:47:16.227799    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:47:18.741675    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:47:23.743919    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:47:23.744123    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:47:23.772121    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:47:23.772233    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:47:23.788554    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:47:23.788632    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:47:23.801755    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:47:23.801822    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:47:23.813694    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:47:23.813757    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:47:23.824284    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:47:23.824353    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:47:23.834510    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:47:23.834573    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:47:23.844797    9587 logs.go:276] 0 containers: []
	W0318 13:47:23.844810    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:47:23.844860    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:47:23.864120    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:47:23.864142    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:47:23.864148    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:47:23.878782    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:47:23.878796    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:47:23.890082    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:47:23.890093    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:47:23.907815    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:47:23.907825    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:47:23.932632    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:47:23.932640    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:47:23.968782    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:47:23.968795    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:47:23.973042    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:47:23.973050    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:47:23.984475    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:47:23.984485    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:47:23.995856    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:47:23.995867    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:47:24.032712    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:47:24.032718    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:47:24.047278    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:47:24.047288    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:47:24.084401    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:47:24.084412    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:47:24.096428    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:47:24.096439    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:47:24.111663    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:47:24.111673    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:47:24.122940    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:47:24.122956    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:47:24.134925    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:47:24.134939    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:47:26.653940    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:47:31.656516    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:47:31.656612    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:47:31.667382    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:47:31.667448    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:47:31.677723    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:47:31.677780    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:47:31.692265    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:47:31.692334    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:47:31.702661    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:47:31.702719    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:47:31.713034    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:47:31.713093    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:47:31.723881    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:47:31.723946    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:47:31.734441    9587 logs.go:276] 0 containers: []
	W0318 13:47:31.734450    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:47:31.734500    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:47:31.745356    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:47:31.745373    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:47:31.745378    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:47:31.786354    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:47:31.786365    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:47:31.802337    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:47:31.802347    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:47:31.815478    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:47:31.815489    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:47:31.832984    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:47:31.832997    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:47:31.870036    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:47:31.870050    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:47:31.885198    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:47:31.885210    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:47:31.899977    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:47:31.899986    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:47:31.913183    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:47:31.913198    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:47:31.925349    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:47:31.925359    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:47:31.929608    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:47:31.929615    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:47:31.940761    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:47:31.940775    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:47:31.964711    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:47:31.964720    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:47:31.976492    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:47:31.976502    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:47:32.014839    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:47:32.014846    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:47:32.029191    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:47:32.029206    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:47:34.543260    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:47:39.545620    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:47:39.546055    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:47:39.587144    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:47:39.587268    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:47:39.608460    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:47:39.608563    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:47:39.623733    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:47:39.623799    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:47:39.637689    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:47:39.637747    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:47:39.648530    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:47:39.648612    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:47:39.659537    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:47:39.659590    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:47:39.670377    9587 logs.go:276] 0 containers: []
	W0318 13:47:39.670389    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:47:39.670440    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:47:39.681041    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:47:39.681060    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:47:39.681065    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:47:39.693265    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:47:39.693277    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:47:39.697405    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:47:39.697414    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:47:39.712549    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:47:39.712560    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:47:39.751510    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:47:39.751529    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:47:39.793807    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:47:39.793819    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:47:39.811906    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:47:39.811917    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:47:39.824109    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:47:39.824120    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:47:39.848674    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:47:39.848684    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:47:39.862236    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:47:39.862247    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:47:39.900828    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:47:39.900840    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:47:39.913693    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:47:39.913704    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:47:39.932835    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:47:39.932846    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:47:39.950068    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:47:39.950081    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:47:39.963335    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:47:39.963346    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:47:39.980761    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:47:39.980774    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:47:42.513902    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:47:47.516089    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:47:47.516254    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:47:47.528434    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:47:47.528508    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:47:47.539580    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:47:47.539652    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:47:47.550133    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:47:47.550196    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:47:47.561011    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:47:47.561086    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:47:47.571526    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:47:47.571586    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:47:47.582168    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:47:47.582237    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:47:47.592056    9587 logs.go:276] 0 containers: []
	W0318 13:47:47.592070    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:47:47.592130    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:47:47.608940    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:47:47.608955    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:47:47.608960    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:47:47.655668    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:47:47.655678    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:47:47.670978    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:47:47.670990    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:47:47.708880    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:47:47.708894    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:47:47.723157    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:47:47.723167    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:47:47.758688    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:47:47.758701    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:47:47.773378    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:47:47.773389    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:47:47.788028    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:47:47.788039    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:47:47.800071    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:47:47.800082    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:47:47.814244    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:47:47.814254    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:47:47.840074    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:47:47.840087    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:47:47.853666    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:47:47.853677    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:47:47.867457    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:47:47.867468    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:47:47.878672    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:47:47.878687    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:47:47.895938    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:47:47.895949    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:47:47.915560    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:47:47.915588    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:47:50.422079    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:47:55.424034    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:47:55.424474    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:47:55.466417    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:47:55.466550    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:47:55.489182    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:47:55.489297    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:47:55.504557    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:47:55.504636    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:47:55.516729    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:47:55.516795    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:47:55.527952    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:47:55.528025    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:47:55.538695    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:47:55.538759    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:47:55.549706    9587 logs.go:276] 0 containers: []
	W0318 13:47:55.549719    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:47:55.549777    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:47:55.560386    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:47:55.560403    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:47:55.560410    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:47:55.572007    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:47:55.572020    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:47:55.609452    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:47:55.609462    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:47:55.614044    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:47:55.614053    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:47:55.628486    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:47:55.628500    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:47:55.665621    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:47:55.665635    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:47:55.679587    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:47:55.679598    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:47:55.697649    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:47:55.697664    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:47:55.709833    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:47:55.709843    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:47:55.725509    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:47:55.725520    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:47:55.737438    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:47:55.737450    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:47:55.751817    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:47:55.751831    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:47:55.763066    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:47:55.763077    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:47:55.778033    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:47:55.778045    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:47:55.815804    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:47:55.815817    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:47:55.828723    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:47:55.828737    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:47:58.354879    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:03.357021    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:03.357143    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:48:03.368780    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:48:03.368853    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:48:03.379401    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:48:03.379456    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:48:03.399532    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:48:03.399597    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:48:03.410955    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:48:03.411039    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:48:03.423458    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:48:03.423557    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:48:03.435087    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:48:03.435155    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:48:03.448198    9587 logs.go:276] 0 containers: []
	W0318 13:48:03.448211    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:48:03.448268    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:48:03.459241    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:48:03.459259    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:48:03.459265    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:48:03.473816    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:48:03.473834    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:48:03.489063    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:48:03.489075    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:48:03.506995    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:48:03.507015    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:48:03.512244    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:48:03.512256    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:48:03.529689    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:48:03.529700    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:48:03.547302    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:48:03.547314    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:48:03.573469    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:48:03.573490    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:48:03.613603    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:48:03.613623    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:48:03.630403    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:48:03.630417    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:48:03.643546    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:48:03.643559    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:48:03.655816    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:48:03.655829    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:48:03.668649    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:48:03.668663    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:48:03.706748    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:48:03.706760    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:48:03.718626    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:48:03.718639    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:48:03.731054    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:48:03.731065    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:48:06.272155    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:11.274217    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:11.274339    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:48:11.285470    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:48:11.285539    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:48:11.296797    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:48:11.296872    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:48:11.309962    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:48:11.310026    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:48:11.320568    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:48:11.320637    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:48:11.331234    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:48:11.331299    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:48:11.342122    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:48:11.342183    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:48:11.354380    9587 logs.go:276] 0 containers: []
	W0318 13:48:11.354394    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:48:11.354443    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:48:11.365273    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:48:11.365289    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:48:11.365294    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:48:11.370683    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:48:11.370691    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:48:11.384813    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:48:11.384824    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:48:11.408267    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:48:11.408277    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:48:11.419452    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:48:11.419462    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:48:11.458233    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:48:11.458243    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:48:11.476264    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:48:11.476273    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:48:11.491073    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:48:11.491086    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:48:11.502566    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:48:11.502577    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:48:11.520964    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:48:11.520977    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:48:11.557616    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:48:11.557631    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:48:11.570239    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:48:11.570250    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:48:11.582793    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:48:11.582804    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:48:11.621679    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:48:11.621688    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:48:11.634673    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:48:11.634683    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:48:11.646607    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:48:11.646618    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:48:14.173110    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:19.175501    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:19.175945    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:48:19.217327    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:48:19.217475    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:48:19.238289    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:48:19.238403    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:48:19.253574    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:48:19.253648    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:48:19.266047    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:48:19.266130    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:48:19.277542    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:48:19.277612    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:48:19.288085    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:48:19.288153    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:48:19.298734    9587 logs.go:276] 0 containers: []
	W0318 13:48:19.298748    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:48:19.298809    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:48:19.309877    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:48:19.309895    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:48:19.309901    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:48:19.321928    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:48:19.321938    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:48:19.333741    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:48:19.333753    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:48:19.345910    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:48:19.345922    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:48:19.380421    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:48:19.380432    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:48:19.420034    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:48:19.420048    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:48:19.433687    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:48:19.433697    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:48:19.471440    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:48:19.471448    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:48:19.475665    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:48:19.475671    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:48:19.487382    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:48:19.487394    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:48:19.499286    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:48:19.499300    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:48:19.524974    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:48:19.524986    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:48:19.539747    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:48:19.539757    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:48:19.550719    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:48:19.550734    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:48:19.565961    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:48:19.565974    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:48:19.584185    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:48:19.584198    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:48:22.111250    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:27.113780    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:27.113921    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:48:27.129786    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:48:27.129863    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:48:27.143509    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:48:27.143582    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:48:27.155720    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:48:27.155799    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:48:27.168036    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:48:27.168114    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:48:27.180167    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:48:27.180231    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:48:27.195887    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:48:27.195959    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:48:27.207253    9587 logs.go:276] 0 containers: []
	W0318 13:48:27.207265    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:48:27.207326    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:48:27.225961    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:48:27.225980    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:48:27.225987    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:48:27.243712    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:48:27.243729    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:48:27.259494    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:48:27.259507    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:48:27.271739    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:48:27.271750    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:48:27.298036    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:48:27.298051    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:48:27.303273    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:48:27.303285    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:48:27.343255    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:48:27.343266    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:48:27.381836    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:48:27.381852    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:48:27.394316    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:48:27.394330    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:48:27.406972    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:48:27.406984    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:48:27.445944    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:48:27.445960    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:48:27.460172    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:48:27.460182    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:48:27.479013    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:48:27.479024    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:48:27.493838    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:48:27.493851    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:48:27.505860    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:48:27.505876    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:48:27.524010    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:48:27.524021    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:48:30.038160    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:35.040701    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:35.040790    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:48:35.054306    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:48:35.054374    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:48:35.065221    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:48:35.065286    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:48:35.076225    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:48:35.076288    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:48:35.087005    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:48:35.087079    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:48:35.097376    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:48:35.097436    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:48:35.108161    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:48:35.108236    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:48:35.118196    9587 logs.go:276] 0 containers: []
	W0318 13:48:35.118207    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:48:35.118255    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:48:35.128926    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:48:35.128941    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:48:35.128946    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:48:35.165493    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:48:35.165503    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:48:35.182635    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:48:35.182646    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:48:35.195920    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:48:35.195930    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:48:35.220673    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:48:35.220682    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:48:35.232185    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:48:35.232196    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:48:35.244889    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:48:35.244899    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:48:35.283323    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:48:35.283331    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:48:35.287402    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:48:35.287412    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:48:35.302620    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:48:35.302632    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:48:35.315410    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:48:35.315420    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:48:35.334495    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:48:35.334510    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:48:35.345858    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:48:35.345871    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:48:35.381352    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:48:35.381365    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:48:35.395781    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:48:35.395792    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:48:35.406884    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:48:35.406894    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:48:37.920645    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:42.922982    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:42.923101    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:48:42.935433    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:48:42.935506    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:48:42.946582    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:48:42.946656    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:48:42.958951    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:48:42.959012    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:48:42.972381    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:48:42.972457    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:48:42.984423    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:48:42.984492    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:48:42.995222    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:48:42.995285    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:48:43.006306    9587 logs.go:276] 0 containers: []
	W0318 13:48:43.006320    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:48:43.006376    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:48:43.016964    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:48:43.016981    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:48:43.016988    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:48:43.055613    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:48:43.055628    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:48:43.060643    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:48:43.060650    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:48:43.102346    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:48:43.102357    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:48:43.141865    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:48:43.141875    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:48:43.153479    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:48:43.153488    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:48:43.165534    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:48:43.165546    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:48:43.180490    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:48:43.180501    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:48:43.205036    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:48:43.205044    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:48:43.219592    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:48:43.219601    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:48:43.232829    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:48:43.232840    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:48:43.248089    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:48:43.248100    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:48:43.258822    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:48:43.258832    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:48:43.270283    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:48:43.270293    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:48:43.288238    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:48:43.288246    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:48:43.300204    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:48:43.300215    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:48:45.814601    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:50.816763    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:50.816890    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:48:50.827959    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:48:50.828032    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:48:50.839783    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:48:50.839857    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:48:50.851751    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:48:50.851821    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:48:50.863149    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:48:50.863280    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:48:50.875113    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:48:50.875182    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:48:50.887871    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:48:50.887948    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:48:50.899906    9587 logs.go:276] 0 containers: []
	W0318 13:48:50.899919    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:48:50.899983    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:48:50.910598    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:48:50.910615    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:48:50.910622    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:48:50.948621    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:48:50.948638    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:48:50.992643    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:48:50.992662    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:48:51.006116    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:48:51.006131    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:48:51.044067    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:48:51.044079    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:48:51.059327    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:48:51.059340    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:48:51.080150    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:48:51.080164    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:48:51.091926    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:48:51.091936    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:48:51.103589    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:48:51.103602    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:48:51.117689    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:48:51.117702    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:48:51.129661    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:48:51.129677    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:48:51.153941    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:48:51.153955    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:48:51.158738    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:48:51.158745    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:48:51.178395    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:48:51.178407    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:48:51.192378    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:48:51.192390    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:48:51.205782    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:48:51.205793    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:48:53.722930    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:58.725315    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:58.725541    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:48:58.749190    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:48:58.749305    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:48:58.764585    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:48:58.764666    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:48:58.776354    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:48:58.776428    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:48:58.787125    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:48:58.787198    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:48:58.797733    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:48:58.797804    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:48:58.812899    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:48:58.812966    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:48:58.823038    9587 logs.go:276] 0 containers: []
	W0318 13:48:58.823049    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:48:58.823107    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:48:58.833150    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:48:58.833166    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:48:58.833174    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:48:58.870860    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:48:58.870871    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:48:58.884770    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:48:58.884781    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:48:58.896801    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:48:58.896816    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:48:58.931671    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:48:58.931682    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:48:58.949303    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:48:58.949314    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:48:58.961469    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:48:58.961480    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:48:58.965678    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:48:58.965687    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:48:58.981079    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:48:58.981092    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:48:58.999396    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:48:58.999406    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:48:59.014288    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:48:59.014298    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:48:59.025648    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:48:59.025663    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:48:59.038012    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:48:59.038023    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:48:59.050694    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:48:59.050705    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:48:59.092407    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:48:59.092421    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:48:59.106448    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:48:59.106462    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:49:01.631559    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:06.633825    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:06.633916    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:49:06.644629    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:49:06.644701    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:49:06.655686    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:49:06.655750    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:49:06.668072    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:49:06.668139    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:49:06.678618    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:49:06.678678    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:49:06.689681    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:49:06.689755    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:49:06.701086    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:49:06.701149    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:49:06.711831    9587 logs.go:276] 0 containers: []
	W0318 13:49:06.711844    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:49:06.711895    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:49:06.722585    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:49:06.722605    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:49:06.722611    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:49:06.744927    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:49:06.744938    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:49:06.762187    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:49:06.762198    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:49:06.778348    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:49:06.778359    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:49:06.790659    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:49:06.790668    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:49:06.826204    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:49:06.826215    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:49:06.830635    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:49:06.830642    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:49:06.868475    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:49:06.868488    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:49:06.882854    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:49:06.882869    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:49:06.903882    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:49:06.903893    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:49:06.941703    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:49:06.941711    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:49:06.955651    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:49:06.955662    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:49:06.967142    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:49:06.967154    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:49:06.978906    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:49:06.978918    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:49:07.002943    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:49:07.002952    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:49:07.014752    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:49:07.014765    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:49:09.530877    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:14.531521    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:14.531673    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:49:14.548959    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:49:14.549054    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:49:14.562061    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:49:14.562132    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:49:14.578097    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:49:14.578164    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:49:14.594904    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:49:14.594968    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:49:14.605616    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:49:14.605680    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:49:14.616248    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:49:14.616318    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:49:14.628227    9587 logs.go:276] 0 containers: []
	W0318 13:49:14.628240    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:49:14.628296    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:49:14.640357    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:49:14.640376    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:49:14.640381    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:49:14.652838    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:49:14.652848    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:49:14.657555    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:49:14.657564    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:49:14.671566    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:49:14.671579    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:49:14.683107    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:49:14.683119    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:49:14.695997    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:49:14.696009    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:49:14.713135    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:49:14.713148    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:49:14.730729    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:49:14.730742    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:49:14.745550    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:49:14.745559    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:49:14.767853    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:49:14.767861    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:49:14.780420    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:49:14.780430    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:49:14.792269    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:49:14.792280    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:49:14.804472    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:49:14.804485    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:49:14.842984    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:49:14.843000    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:49:14.877279    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:49:14.877289    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:49:14.891104    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:49:14.891118    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:49:17.437205    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:22.439810    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:22.440274    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:49:22.479255    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:49:22.479416    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:49:22.500509    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:49:22.500598    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:49:22.515834    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:49:22.515899    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:49:22.528445    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:49:22.528507    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:49:22.539869    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:49:22.539933    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:49:22.551164    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:49:22.551233    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:49:22.566577    9587 logs.go:276] 0 containers: []
	W0318 13:49:22.566589    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:49:22.566649    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:49:22.578708    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:49:22.578729    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:49:22.578735    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:49:22.603117    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:49:22.603139    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:49:22.643628    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:49:22.643640    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:49:22.662379    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:49:22.662392    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:49:22.675322    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:49:22.675334    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:49:22.692997    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:49:22.693010    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:49:22.711413    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:49:22.711425    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:49:22.726994    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:49:22.727004    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:49:22.738864    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:49:22.738878    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:49:22.776497    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:49:22.776507    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:49:22.780989    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:49:22.780997    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:49:22.799388    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:49:22.799402    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:49:22.837091    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:49:22.837101    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:49:22.848802    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:49:22.848812    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:49:22.863268    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:49:22.863277    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:49:22.875685    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:49:22.875696    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:49:25.392857    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:30.395127    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:30.395313    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:49:30.419343    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:49:30.419445    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:49:30.435698    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:49:30.435775    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:49:30.448638    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:49:30.448706    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:49:30.460306    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:49:30.460371    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:49:30.470636    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:49:30.470707    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:49:30.480984    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:49:30.481047    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:49:30.491396    9587 logs.go:276] 0 containers: []
	W0318 13:49:30.491408    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:49:30.491468    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:49:30.502174    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:49:30.502191    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:49:30.502197    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:49:30.506796    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:49:30.506803    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:49:30.518703    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:49:30.518715    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:49:30.533090    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:49:30.533100    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:49:30.572021    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:49:30.572030    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:49:30.607361    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:49:30.607372    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:49:30.631975    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:49:30.631987    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:49:30.653578    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:49:30.653591    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:49:30.675102    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:49:30.675113    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:49:30.686725    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:49:30.686741    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:49:30.703955    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:49:30.703966    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:49:30.746005    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:49:30.746015    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:49:30.770193    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:49:30.770202    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:49:30.781781    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:49:30.781793    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:49:30.796562    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:49:30.796571    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:49:30.811067    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:49:30.811076    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:49:33.324170    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:38.326420    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:38.326561    9587 kubeadm.go:591] duration metric: took 4m4.416703375s to restartPrimaryControlPlane
	W0318 13:49:38.326696    9587 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:49:38.326741    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0318 13:49:39.401167    9587 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.074415666s)
	I0318 13:49:39.401233    9587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:49:39.406371    9587 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:49:39.409276    9587 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:49:39.412097    9587 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:49:39.412102    9587 kubeadm.go:156] found existing configuration files:
	
	I0318 13:49:39.412123    9587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/admin.conf
	I0318 13:49:39.414486    9587 kubeadm.go:162] "https://control-plane.minikube.internal:51166" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:49:39.414505    9587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:49:39.417585    9587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/kubelet.conf
	I0318 13:49:39.420846    9587 kubeadm.go:162] "https://control-plane.minikube.internal:51166" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:49:39.420866    9587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:49:39.423490    9587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/controller-manager.conf
	I0318 13:49:39.425984    9587 kubeadm.go:162] "https://control-plane.minikube.internal:51166" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:49:39.426007    9587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:49:39.429123    9587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/scheduler.conf
	I0318 13:49:39.432029    9587 kubeadm.go:162] "https://control-plane.minikube.internal:51166" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:49:39.432052    9587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
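
	The four grep-then-rm pairs above implement a single check: keep a kubeconfig only if it already references the expected control-plane endpoint, otherwise delete it so `kubeadm init` can regenerate it. The same sweep rendered as a native-Go sketch (requires root for /etc/kubernetes; the endpoint string is copied from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:51166"
    	for _, conf := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(conf)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Equivalent to the `grep <endpoint> <conf>` + `rm -f <conf>` pair in the log.
    			fmt.Println("removing stale config:", conf)
    			_ = os.Remove(conf)
    		}
    	}
    }
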
	I0318 13:49:39.434704    9587 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:49:39.450443    9587 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0318 13:49:39.450472    9587 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:49:39.506544    9587 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:49:39.506635    9587 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:49:39.506684    9587 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:49:39.555676    9587 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:49:39.562897    9587 out.go:204]   - Generating certificates and keys ...
	I0318 13:49:39.562936    9587 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:49:39.562965    9587 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:49:39.563004    9587 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:49:39.563040    9587 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:49:39.563074    9587 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:49:39.563106    9587 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:49:39.563139    9587 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:49:39.563180    9587 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:49:39.563220    9587 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:49:39.563258    9587 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:49:39.563285    9587 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:49:39.563319    9587 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:49:39.627338    9587 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:49:39.687285    9587 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:49:39.754816    9587 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:49:39.870379    9587 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:49:39.901926    9587 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:49:39.902335    9587 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:49:39.902449    9587 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:49:39.991377    9587 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:49:39.995847    9587 out.go:204]   - Booting up control plane ...
	I0318 13:49:39.995915    9587 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:49:39.995965    9587 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:49:39.996042    9587 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:49:39.996087    9587 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:49:39.996188    9587 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:49:44.497549    9587 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.504757 seconds
	I0318 13:49:44.497708    9587 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:49:44.502177    9587 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:49:45.028121    9587 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:49:45.028488    9587 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-647000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:49:45.533017    9587 kubeadm.go:309] [bootstrap-token] Using token: vlu6oa.7j8asp2g3j3jbv2u
	I0318 13:49:45.538781    9587 out.go:204]   - Configuring RBAC rules ...
	I0318 13:49:45.538843    9587 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:49:45.538905    9587 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:49:45.542214    9587 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:49:45.543200    9587 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:49:45.544154    9587 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:49:45.545106    9587 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:49:45.548530    9587 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:49:45.716140    9587 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:49:45.938845    9587 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:49:45.939229    9587 kubeadm.go:309] 
	I0318 13:49:45.939262    9587 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:49:45.939266    9587 kubeadm.go:309] 
	I0318 13:49:45.939305    9587 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:49:45.939308    9587 kubeadm.go:309] 
	I0318 13:49:45.939319    9587 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:49:45.939359    9587 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:49:45.939386    9587 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:49:45.939404    9587 kubeadm.go:309] 
	I0318 13:49:45.939431    9587 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:49:45.939434    9587 kubeadm.go:309] 
	I0318 13:49:45.939457    9587 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:49:45.939461    9587 kubeadm.go:309] 
	I0318 13:49:45.939523    9587 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:49:45.939578    9587 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:49:45.939620    9587 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:49:45.939625    9587 kubeadm.go:309] 
	I0318 13:49:45.939674    9587 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:49:45.939730    9587 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:49:45.939741    9587 kubeadm.go:309] 
	I0318 13:49:45.939801    9587 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token vlu6oa.7j8asp2g3j3jbv2u \
	I0318 13:49:45.939856    9587 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f245f57130bb8b4395382cd74200f36af238eb522c12e31804ffbb421429194 \
	I0318 13:49:45.939867    9587 kubeadm.go:309] 	--control-plane 
	I0318 13:49:45.939870    9587 kubeadm.go:309] 
	I0318 13:49:45.939914    9587 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:49:45.939916    9587 kubeadm.go:309] 
	I0318 13:49:45.939983    9587 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token vlu6oa.7j8asp2g3j3jbv2u \
	I0318 13:49:45.940060    9587 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f245f57130bb8b4395382cd74200f36af238eb522c12e31804ffbb421429194 
	I0318 13:49:45.940121    9587 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:49:45.940129    9587 cni.go:84] Creating CNI manager for ""
	I0318 13:49:45.940137    9587 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:49:45.949774    9587 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:49:45.953951    9587 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:49:45.957096    9587 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
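
	The 457-byte conflist pushed above is minikube-internal; the sketch below only illustrates the general shape of a bridge CNI config written to that path (the cniVersion, plugin fields, and 10.244.0.0/16 subnet are assumptions, not the actual payload):

    package main

    import "os"

    // conflist is an illustrative bridge CNI configuration, not minikube's exact payload.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }`

    func main() {
    	// Same destination as the `scp memory --> /etc/cni/net.d/1-k8s.conflist` step above.
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }
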
	I0318 13:49:45.961925    9587 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:49:45.961966    9587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:49:45.962028    9587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-647000 minikube.k8s.io/updated_at=2024_03_18T13_49_45_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=running-upgrade-647000 minikube.k8s.io/primary=true
	I0318 13:49:46.000905    9587 kubeadm.go:1107] duration metric: took 38.971791ms to wait for elevateKubeSystemPrivileges
	I0318 13:49:46.000910    9587 ops.go:34] apiserver oom_adj: -16
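
	The `-16` read back above is the apiserver's /proc/<pid>/oom_adj value, meaning the kernel is strongly discouraged from OOM-killing it. A small Go sketch of the same probe, shelling out to pgrep exactly as the log does:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// -x exact match, -n newest process, -f match the full command line.
    	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		fmt.Println("apiserver process not found:", err)
    		return
    	}
    	pid := strings.TrimSpace(string(out))
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Println("read oom_adj:", err)
    		return
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }
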
	W0318 13:49:46.001005    9587 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:49:46.001010    9587 kubeadm.go:393] duration metric: took 4m12.105172333s to StartCluster
	I0318 13:49:46.001024    9587 settings.go:142] acquiring lock: {Name:mkb16a292265123b9734bd031ef06799b38c3f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:46.001177    9587 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:49:46.001579    9587 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/kubeconfig: {Name:mk6a62990bf9328d54440f15380010f8199a9228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:46.001793    9587 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:49:46.005773    9587 out.go:177] * Verifying Kubernetes components...
	I0318 13:49:46.001815    9587 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:49:46.001961    9587 config.go:182] Loaded profile config "running-upgrade-647000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 13:49:46.013812    9587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:46.013827    9587 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-647000"
	I0318 13:49:46.013851    9587 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-647000"
	I0318 13:49:46.013837    9587 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-647000"
	I0318 13:49:46.013866    9587 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-647000"
	W0318 13:49:46.013871    9587 addons.go:243] addon storage-provisioner should already be in state true
	I0318 13:49:46.013880    9587 host.go:66] Checking if "running-upgrade-647000" exists ...
	I0318 13:49:46.015149    9587 kapi.go:59] client config for running-upgrade-647000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/client.key", CAFile:"/Users/jenkins/minikube-integration/18421-6777/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105bcea80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
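
	The rest.Config dump above is what minikube hands to client-go. A hedged sketch of building an equivalent client from the same host and certificate paths (error handling reduced to panics for brevity; with the apiserver unreachable, the discovery call is where the i/o timeout would surface):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	// Host and cert/key/CA paths are taken verbatim from the log above.
    	cfg := &rest.Config{
    		Host: "https://10.0.2.15:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/client.crt",
    			KeyFile:  "/Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/client.key",
    			CAFile:   "/Users/jenkins/minikube-integration/18421-6777/.minikube/ca.crt",
    		},
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	v, err := clientset.Discovery().ServerVersion()
    	if err != nil {
    		panic(err) // with the apiserver down, this surfaces the dial timeout
    	}
    	fmt.Println("server version:", v.GitVersion)
    }
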
	I0318 13:49:46.015741    9587 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-647000"
	W0318 13:49:46.015746    9587 addons.go:243] addon default-storageclass should already be in state true
	I0318 13:49:46.015753    9587 host.go:66] Checking if "running-upgrade-647000" exists ...
	I0318 13:49:46.019869    9587 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:46.022855    9587 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:49:46.022860    9587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:49:46.022867    9587 sshutil.go:53] new ssh client: &{IP:localhost Port:51134 SSHKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/running-upgrade-647000/id_rsa Username:docker}
	I0318 13:49:46.023731    9587 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:49:46.023736    9587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:49:46.023740    9587 sshutil.go:53] new ssh client: &{IP:localhost Port:51134 SSHKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/running-upgrade-647000/id_rsa Username:docker}
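
	`scp memory --> <path> (N bytes)` in the lines above means the addon manifest is pushed from an in-memory buffer over the SSH connection just opened, not copied from a local file. A sketch of that idea, with golang.org/x/crypto/ssh plus github.com/pkg/sftp standing in for minikube's own scp implementation (host, port, user, and key path are the ones in the log; the manifest bytes are a placeholder):

    package main

    import (
    	"os"

    	"github.com/pkg/sftp"
    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/running-upgrade-647000/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	conn, err := ssh.Dial("tcp", "localhost:51134", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never in production
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	client, err := sftp.NewClient(conn)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	// Write the in-memory bytes straight to the guest path, as "scp memory" does.
    	f, err := client.Create("/etc/kubernetes/addons/storage-provisioner.yaml")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	if _, err := f.Write([]byte("# manifest bytes held in memory\n")); err != nil {
    		panic(err)
    	}
    }
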
	I0318 13:49:46.100840    9587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:49:46.105918    9587 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:49:46.105955    9587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:46.110444    9587 api_server.go:72] duration metric: took 108.638333ms to wait for apiserver process to appear ...
	I0318 13:49:46.110451    9587 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:49:46.110459    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:46.120745    9587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:49:46.122396    9587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:49:51.112656    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:51.112734    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:56.113239    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:56.113264    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:01.113658    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:01.113734    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:06.114270    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:06.114288    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:11.114905    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:11.114936    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:16.115810    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:16.115838    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0318 13:50:16.460270    9587 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0318 13:50:16.466390    9587 out.go:177] * Enabled addons: storage-provisioner
	I0318 13:50:16.477311    9587 addons.go:505] duration metric: took 30.47566075s for enable addons: enabled=[storage-provisioner]
	I0318 13:50:21.116899    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:21.116922    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:26.117554    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:26.117603    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:31.119229    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:31.119278    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:36.121340    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:36.121379    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:41.123450    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:41.123494    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:46.124813    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:46.124907    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:50:46.136879    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:50:46.136949    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:50:46.148041    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:50:46.148111    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:50:46.158586    9587 logs.go:276] 2 containers: [16c60d7d510f 61927732b548]
	I0318 13:50:46.158661    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:50:46.169680    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:50:46.169745    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:50:46.180319    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:50:46.180390    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:50:46.191029    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:50:46.191094    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:50:46.201019    9587 logs.go:276] 0 containers: []
	W0318 13:50:46.201030    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:46.201087    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:50:46.211373    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:50:46.211387    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:50:46.211393    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:50:46.223480    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:46.223495    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:46.258010    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:50:46.258021    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:50:46.272365    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:50:46.272377    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:50:46.283747    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:50:46.283758    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:50:46.295096    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:50:46.295107    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:50:46.306583    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:50:46.306595    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:50:46.324255    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:50:46.324265    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:50:46.348175    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:46.348183    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:46.352529    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:46.352539    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:50:46.396503    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:50:46.396515    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:50:46.411127    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:50:46.411139    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:50:46.423031    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:50:46.423042    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:50:48.939322    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:53.940523    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:53.940725    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:50:53.965096    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:50:53.965209    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:50:53.981326    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:50:53.981405    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:50:53.994768    9587 logs.go:276] 2 containers: [16c60d7d510f 61927732b548]
	I0318 13:50:53.994834    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:50:54.006171    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:50:54.006237    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:50:54.016552    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:50:54.016616    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:50:54.027294    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:50:54.027359    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:50:54.037138    9587 logs.go:276] 0 containers: []
	W0318 13:50:54.037149    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:54.037199    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:50:54.047917    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:50:54.047933    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:50:54.047938    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:50:54.065328    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:50:54.065339    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:50:54.090364    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:50:54.090380    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:50:54.101683    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:54.101694    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:54.136450    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:50:54.136461    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:50:54.150776    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:50:54.150789    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:50:54.165528    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:50:54.165539    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:50:54.178731    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:50:54.178741    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:50:54.190752    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:50:54.190765    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:50:54.209160    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:50:54.209171    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:50:54.220732    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:50:54.220745    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:50:54.233553    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:54.233564    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:54.237925    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:54.237932    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:50:56.784008    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:01.786248    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:01.786450    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:01.808316    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:51:01.808412    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:01.821445    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:51:01.821520    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:01.833354    9587 logs.go:276] 2 containers: [16c60d7d510f 61927732b548]
	I0318 13:51:01.833423    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:01.843388    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:51:01.843456    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:01.853492    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:51:01.853563    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:01.864400    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:51:01.864468    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:01.874287    9587 logs.go:276] 0 containers: []
	W0318 13:51:01.874300    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:01.874365    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:01.885037    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:51:01.885053    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:51:01.885059    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:51:01.898504    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:51:01.898514    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:51:01.910597    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:51:01.910610    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:51:01.925433    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:51:01.925445    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:51:01.939793    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:01.939803    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:01.973903    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:01.973912    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:01.978624    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:01.978632    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:02.014681    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:51:02.014691    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:51:02.029482    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:51:02.029493    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:02.041109    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:51:02.041120    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:51:02.053096    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:51:02.053106    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:51:02.070490    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:51:02.070499    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:51:02.081604    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:02.081616    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:04.605165    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:09.607602    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:09.607757    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:09.625094    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:51:09.625167    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:09.637216    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:51:09.637290    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:09.647861    9587 logs.go:276] 2 containers: [16c60d7d510f 61927732b548]
	I0318 13:51:09.647924    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:09.657928    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:51:09.657985    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:09.668084    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:51:09.668155    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:09.678238    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:51:09.678298    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:09.691888    9587 logs.go:276] 0 containers: []
	W0318 13:51:09.691901    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:09.691948    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:09.701988    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:51:09.702001    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:09.702006    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:09.735336    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:09.735345    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:09.769474    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:51:09.769485    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:51:09.784148    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:51:09.784159    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:51:09.798820    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:51:09.798834    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:51:09.810156    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:51:09.810169    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:51:09.824919    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:51:09.824929    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:51:09.836650    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:51:09.836663    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:51:09.856643    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:51:09.856653    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:09.868096    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:09.868109    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:09.873164    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:51:09.873173    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:51:09.888091    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:51:09.888101    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:51:09.899066    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:09.899076    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:12.423378    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:17.423971    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:17.424090    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:17.436582    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:51:17.436681    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:17.446881    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:51:17.446938    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:17.457173    9587 logs.go:276] 2 containers: [16c60d7d510f 61927732b548]
	I0318 13:51:17.457232    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:17.467601    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:51:17.467670    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:17.478488    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:51:17.478560    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:17.489005    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:51:17.489066    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:17.499042    9587 logs.go:276] 0 containers: []
	W0318 13:51:17.499053    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:17.499107    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:17.509145    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:51:17.509160    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:51:17.509165    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:51:17.523889    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:51:17.523899    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:51:17.535235    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:51:17.535245    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:51:17.549616    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:51:17.549626    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:51:17.561019    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:17.561030    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:17.584864    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:17.584873    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:17.618121    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:17.618135    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:17.622415    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:51:17.622423    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:51:17.636933    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:51:17.636943    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:17.648335    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:51:17.648345    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:51:17.665647    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:17.665660    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:17.700445    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:51:17.700457    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:51:17.715404    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:51:17.715418    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:51:20.228985    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:25.231178    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:25.231284    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:25.242775    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:51:25.242846    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:25.252983    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:51:25.253053    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:25.263262    9587 logs.go:276] 2 containers: [16c60d7d510f 61927732b548]
	I0318 13:51:25.263330    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:25.273489    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:51:25.273548    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:25.284243    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:51:25.284321    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:25.294244    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:51:25.294313    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:25.304123    9587 logs.go:276] 0 containers: []
	W0318 13:51:25.304134    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:25.304190    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:25.315928    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:51:25.315943    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:25.315949    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:25.320951    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:51:25.320958    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:51:25.337553    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:51:25.337564    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:51:25.355155    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:51:25.355169    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:51:25.367384    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:51:25.367394    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:25.378818    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:51:25.378829    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:51:25.393716    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:51:25.393725    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:51:25.405328    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:25.405338    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:25.430069    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:25.430077    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:25.463750    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:25.463759    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:25.505844    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:51:25.505856    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:51:25.519967    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:51:25.519977    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:51:25.541856    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:51:25.541865    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:51:28.058500    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:33.060743    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:33.060865    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:33.071862    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:51:33.071928    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:33.082051    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:51:33.082113    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:33.092444    9587 logs.go:276] 2 containers: [16c60d7d510f 61927732b548]
	I0318 13:51:33.092508    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:33.107384    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:51:33.107452    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:33.118455    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:51:33.118518    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:33.128757    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:51:33.128875    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:33.139271    9587 logs.go:276] 0 containers: []
	W0318 13:51:33.139280    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:33.139331    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:33.149521    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:51:33.149533    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:51:33.149538    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:51:33.161210    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:33.161220    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:33.186755    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:51:33.186766    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:33.199365    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:33.199377    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:33.234242    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:51:33.234257    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:51:33.251048    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:51:33.251059    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:51:33.265735    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:51:33.265746    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:51:33.277331    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:51:33.277341    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:51:33.294814    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:33.294823    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:33.299538    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:33.299548    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:33.334148    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:51:33.334157    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:51:33.348393    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:51:33.348403    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:51:33.360161    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:51:33.360173    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:51:35.874193    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:40.876576    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:40.876709    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:40.887944    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:51:40.888014    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:40.902673    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:51:40.902741    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:40.912887    9587 logs.go:276] 2 containers: [16c60d7d510f 61927732b548]
	I0318 13:51:40.912949    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:40.923067    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:51:40.923136    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:40.939650    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:51:40.939722    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:40.950053    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:51:40.950118    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:40.967247    9587 logs.go:276] 0 containers: []
	W0318 13:51:40.967262    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:40.967322    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:40.978857    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:51:40.978875    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:51:40.978881    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:51:40.990750    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:51:40.990761    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:51:41.008351    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:51:41.008362    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:51:41.019809    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:41.019821    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:41.045304    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:41.045315    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:41.080297    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:51:41.080307    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:51:41.094706    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:51:41.094715    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:51:41.106443    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:51:41.106452    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:51:41.120777    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:51:41.120785    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:41.132145    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:41.132156    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:41.166829    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:41.166848    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:41.171411    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:51:41.171419    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:51:41.194822    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:51:41.194838    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:51:43.710190    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:48.710883    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:48.710964    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:48.727279    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:51:48.727349    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:48.746761    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:51:48.746831    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:48.759384    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:51:48.759460    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:48.772300    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:51:48.772375    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:48.783638    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:51:48.783715    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:48.795201    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:51:48.795277    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:48.806441    9587 logs.go:276] 0 containers: []
	W0318 13:51:48.806453    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:48.806505    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:48.817085    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:51:48.817100    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:51:48.817107    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:51:48.828826    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:51:48.828838    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:48.840425    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:48.840437    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:48.876982    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:51:48.876996    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:51:48.890732    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:51:48.890743    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:51:48.908667    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:51:48.908681    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:51:48.920211    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:51:48.920225    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:51:48.931334    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:51:48.931345    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:51:48.942456    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:48.942466    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:48.965683    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:48.965690    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:48.969760    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:51:48.969765    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:51:48.984939    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:51:48.984951    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:51:48.996501    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:51:48.996512    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:51:49.008372    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:49.008382    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:49.042924    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:51:49.042935    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:51:51.559369    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:56.561735    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:56.561825    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:56.573522    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:51:56.573594    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:56.584593    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:51:56.584663    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:56.595847    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:51:56.595915    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:56.607646    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:51:56.607721    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:56.618588    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:51:56.618655    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:56.629600    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:51:56.629664    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:56.640339    9587 logs.go:276] 0 containers: []
	W0318 13:51:56.640353    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:56.640414    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:56.652360    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:51:56.652379    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:56.652384    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:56.688721    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:56.688730    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:56.693745    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:51:56.693759    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:51:56.705760    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:51:56.705771    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:51:56.724361    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:51:56.724373    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:51:56.739433    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:51:56.739450    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:51:56.752470    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:51:56.752481    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:51:56.764338    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:51:56.764349    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:51:56.780045    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:51:56.780059    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:56.791762    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:56.791776    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:56.826723    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:51:56.826732    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:51:56.841138    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:51:56.841149    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:51:56.853095    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:51:56.853110    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:51:56.873786    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:51:56.873800    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:51:56.885839    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:56.885850    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:59.412661    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:04.414808    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:04.414894    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:04.426861    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:52:04.426936    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:04.437857    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:52:04.437923    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:04.449565    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:52:04.449638    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:04.460923    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:52:04.460991    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:04.471749    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:52:04.471819    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:04.483934    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:52:04.484003    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:04.495602    9587 logs.go:276] 0 containers: []
	W0318 13:52:04.495614    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:04.495676    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:04.507521    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:52:04.507536    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:52:04.507540    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:52:04.522912    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:52:04.522920    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:52:04.539101    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:52:04.539112    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:52:04.557816    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:04.557828    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:04.593541    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:52:04.593558    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:52:04.605491    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:52:04.605500    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:52:04.618363    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:52:04.618373    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:52:04.635244    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:04.635254    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:04.640192    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:52:04.640203    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:52:04.656253    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:52:04.656264    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:52:04.668586    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:52:04.668598    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:52:04.680344    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:04.680354    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:04.704346    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:04.704355    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:04.740348    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:52:04.740360    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:52:04.754846    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:52:04.754860    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:07.269450    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:12.271709    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:12.271790    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:12.283335    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:52:12.283402    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:12.294291    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:52:12.294353    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:12.305622    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:52:12.305697    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:12.318012    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:52:12.318090    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:12.329427    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:52:12.329496    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:12.340611    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:52:12.340684    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:12.353154    9587 logs.go:276] 0 containers: []
	W0318 13:52:12.353166    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:12.353227    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:12.364412    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:52:12.364431    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:52:12.364437    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:52:12.380406    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:12.380416    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:12.415085    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:52:12.415099    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:52:12.428266    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:52:12.428278    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:52:12.443733    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:52:12.443745    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:52:12.459764    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:52:12.459775    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:12.471914    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:12.471924    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:12.509236    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:52:12.509249    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:52:12.524243    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:52:12.524256    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:52:12.536200    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:52:12.536212    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:52:12.555261    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:52:12.555270    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:52:12.571725    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:52:12.571734    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:52:12.590293    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:52:12.590303    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:52:12.604198    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:12.604210    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:12.628473    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:12.628483    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:15.134897    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:20.137107    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:20.137175    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:20.148815    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:52:20.148887    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:20.160038    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:52:20.160110    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:20.172054    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:52:20.172125    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:20.183675    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:52:20.183751    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:20.194829    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:52:20.194895    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:20.206498    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:52:20.206560    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:20.217518    9587 logs.go:276] 0 containers: []
	W0318 13:52:20.217529    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:20.217586    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:20.229007    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:52:20.229024    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:52:20.229030    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:52:20.246633    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:52:20.246642    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:52:20.259053    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:52:20.259061    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:52:20.272185    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:52:20.272194    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:52:20.290661    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:20.290674    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:20.327142    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:52:20.327155    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:52:20.342171    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:20.342184    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:20.368646    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:52:20.368655    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:20.381399    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:20.381410    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:20.419347    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:52:20.419361    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:52:20.432477    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:52:20.432489    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:52:20.447532    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:20.447544    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:20.452425    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:52:20.452432    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:52:20.464356    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:52:20.464371    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:52:20.479602    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:52:20.479614    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:52:22.997949    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:28.000339    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:28.000743    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:28.032832    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:52:28.032953    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:28.052821    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:52:28.052924    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:28.068134    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:52:28.068211    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:28.080867    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:52:28.080937    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:28.092496    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:52:28.092567    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:28.103876    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:52:28.103941    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:28.115270    9587 logs.go:276] 0 containers: []
	W0318 13:52:28.115282    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:28.115342    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:28.127139    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:52:28.127157    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:28.127163    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:28.131921    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:28.131932    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:28.169351    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:52:28.169363    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:52:28.181347    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:52:28.181359    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:52:28.193754    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:52:28.193767    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:52:28.206807    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:52:28.206819    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:52:28.225941    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:28.225953    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:28.261180    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:52:28.261203    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:52:28.277907    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:52:28.277921    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:52:28.293337    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:52:28.293351    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:52:28.305683    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:52:28.305696    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:52:28.318525    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:52:28.318536    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:52:28.331138    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:52:28.331148    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:52:28.347056    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:28.347070    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:28.372049    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:52:28.372068    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:30.885059    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:35.887332    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:35.887524    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:35.906564    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:52:35.906661    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:35.921232    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:52:35.921304    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:35.933546    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:52:35.933619    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:35.944709    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:52:35.944776    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:35.955689    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:52:35.955755    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:35.967374    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:52:35.967440    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:35.978714    9587 logs.go:276] 0 containers: []
	W0318 13:52:35.978728    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:35.978787    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:35.990336    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:52:35.990353    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:35.990359    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:36.025633    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:36.025651    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:36.030911    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:52:36.030923    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:52:36.052172    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:36.052185    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:36.078507    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:36.078526    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:36.118904    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:52:36.118918    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:52:36.133992    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:52:36.134007    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:52:36.146645    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:52:36.146659    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:52:36.160094    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:52:36.160105    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:52:36.176509    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:52:36.176522    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:36.189036    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:52:36.189052    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:52:36.204362    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:52:36.204382    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:52:36.218057    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:52:36.218069    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:52:36.231310    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:52:36.231322    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:52:36.250813    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:52:36.250826    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:52:38.772189    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:43.774485    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:43.774610    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:43.785509    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:52:43.785563    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:43.800850    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:52:43.800920    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:43.812941    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:52:43.813011    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:43.823855    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:52:43.823923    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:43.834952    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:52:43.835024    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:43.847154    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:52:43.847236    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:43.858364    9587 logs.go:276] 0 containers: []
	W0318 13:52:43.858375    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:43.858431    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:43.869263    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:52:43.869279    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:52:43.869285    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:52:43.885662    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:52:43.885672    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:52:43.905684    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:43.905699    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:43.932766    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:43.932785    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:43.987751    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:52:43.987768    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:52:44.001040    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:44.001052    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:44.005827    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:52:44.005836    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:52:44.019732    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:52:44.019743    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:52:44.032931    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:52:44.032948    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:52:44.048923    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:52:44.048938    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:44.062635    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:44.062649    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:44.097260    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:52:44.097281    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:52:44.112983    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:52:44.112996    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:52:44.125914    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:52:44.125926    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:52:44.145485    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:52:44.145499    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:52:46.660447    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:51.660752    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:51.660946    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:51.678893    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:52:51.678991    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:51.692901    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:52:51.692978    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:51.704781    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:52:51.704856    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:51.716408    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:52:51.716478    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:51.727397    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:52:51.727460    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:51.738613    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:52:51.738677    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:51.749298    9587 logs.go:276] 0 containers: []
	W0318 13:52:51.749309    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:51.749365    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:51.760206    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:52:51.760223    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:51.760228    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:51.800458    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:52:51.800470    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:52:51.815163    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:52:51.815173    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:52:51.827878    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:51.827891    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:51.832386    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:52:51.832393    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:52:51.845395    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:52:51.845406    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:52:51.857832    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:51.857843    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:51.883296    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:52:51.883303    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:52:51.902398    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:52:51.902409    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:52:51.917570    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:52:51.917580    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:52:51.936236    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:52:51.936246    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:52:51.947685    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:52:51.947696    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:51.960904    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:51.960916    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:51.996020    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:52:51.996029    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:52:52.011572    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:52:52.011587    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:52:54.526379    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:59.528604    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:59.528705    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:59.542877    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:52:59.542951    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:59.554209    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:52:59.554282    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:59.565087    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:52:59.565158    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:59.575502    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:52:59.575564    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:59.586337    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:52:59.586410    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:59.597353    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:52:59.597427    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:59.611833    9587 logs.go:276] 0 containers: []
	W0318 13:52:59.611845    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:59.611906    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:59.622289    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:52:59.622305    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:52:59.622310    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:52:59.633925    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:52:59.633934    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:52:59.645324    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:59.645333    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:59.680919    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:52:59.680933    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:52:59.702735    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:52:59.702750    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:52:59.720580    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:59.720589    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:59.753558    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:59.753568    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:59.758521    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:52:59.758528    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:52:59.770772    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:52:59.770785    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:52:59.783193    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:52:59.783210    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:52:59.799644    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:52:59.799657    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:52:59.811385    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:59.811399    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:59.835046    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:52:59.835054    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:59.846282    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:52:59.846296    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:52:59.860909    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:52:59.860920    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:53:02.374349    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:07.376679    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:07.376806    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:53:07.389286    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:53:07.389354    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:53:07.401237    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:53:07.401306    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:53:07.411779    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:53:07.411855    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:53:07.422282    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:53:07.422348    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:53:07.432741    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:53:07.432812    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:53:07.442904    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:53:07.442974    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:53:07.452730    9587 logs.go:276] 0 containers: []
	W0318 13:53:07.452743    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:07.452803    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:53:07.463091    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:53:07.463107    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:07.463112    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:07.467474    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:07.467485    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:53:07.505044    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:53:07.505057    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:53:07.516881    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:53:07.516892    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:53:07.529874    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:53:07.529887    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:53:07.549806    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:07.549821    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:07.583632    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:53:07.583642    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:07.594854    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:53:07.594868    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:53:07.606584    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:53:07.606600    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:53:07.618192    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:53:07.618202    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:53:07.633253    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:53:07.633266    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:53:07.647477    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:53:07.647486    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:53:07.661104    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:53:07.661118    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:53:07.672514    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:53:07.672524    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:53:07.684106    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:53:07.684116    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:53:10.210647    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:15.212948    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:15.213272    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:53:15.248273    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:53:15.248401    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:53:15.266581    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:53:15.266674    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:53:15.280617    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:53:15.280691    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:53:15.292261    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:53:15.292338    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:53:15.303385    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:53:15.303456    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:53:15.314447    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:53:15.314516    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:53:15.325009    9587 logs.go:276] 0 containers: []
	W0318 13:53:15.325020    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:15.325074    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:53:15.335920    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:53:15.335939    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:15.335944    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:15.370273    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:53:15.370284    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:53:15.384363    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:53:15.384373    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:53:15.399725    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:53:15.399735    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:53:15.411920    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:53:15.411930    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:53:15.423556    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:15.423567    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:15.427969    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:53:15.427978    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:53:15.445220    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:53:15.445233    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:15.457797    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:15.457811    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:53:15.492046    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:53:15.492058    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:53:15.503672    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:53:15.503682    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:53:15.515394    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:53:15.515406    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:53:15.527415    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:53:15.527426    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:53:15.542082    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:53:15.542095    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:53:15.554246    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:53:15.554257    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:53:18.079002    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:23.081214    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:23.081351    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:53:23.105466    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:53:23.105540    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:53:23.116878    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:53:23.116945    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:53:23.127466    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:53:23.127530    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:53:23.138418    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:53:23.138479    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:53:23.149409    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:53:23.149471    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:53:23.160030    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:53:23.160089    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:53:23.170448    9587 logs.go:276] 0 containers: []
	W0318 13:53:23.170459    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:23.170514    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:53:23.181531    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:53:23.181556    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:53:23.181561    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:53:23.195812    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:53:23.195822    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:53:23.207352    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:53:23.207363    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:53:23.218695    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:53:23.218708    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:53:23.230880    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:53:23.230891    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:53:23.243418    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:23.243432    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:23.248042    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:23.248048    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:53:23.285895    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:53:23.285909    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:23.297571    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:23.297581    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:23.330502    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:53:23.330513    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:53:23.344982    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:53:23.344992    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:53:23.356164    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:53:23.356175    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:53:23.367282    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:53:23.367295    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:53:23.382373    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:53:23.382384    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:53:23.399936    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:53:23.399947    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:53:25.926904    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:30.929156    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:30.929402    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:53:30.954589    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:53:30.954709    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:53:30.970955    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:53:30.971041    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:53:30.984923    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:53:30.985004    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:53:30.996742    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:53:30.996813    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:53:31.009593    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:53:31.009663    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:53:31.020613    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:53:31.020679    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:53:31.031884    9587 logs.go:276] 0 containers: []
	W0318 13:53:31.031898    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:31.031952    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:53:31.042330    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:53:31.042346    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:31.042352    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:53:31.078936    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:53:31.078950    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:53:31.093850    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:53:31.093860    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:31.107136    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:53:31.107146    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:53:31.119350    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:31.119362    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:31.123673    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:53:31.123683    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:53:31.140174    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:53:31.140185    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:53:31.152271    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:31.152282    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:31.185729    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:53:31.185737    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:53:31.201646    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:53:31.201655    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:53:31.213267    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:53:31.213276    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:53:31.228447    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:53:31.228457    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:53:31.251936    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:53:31.251946    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:53:31.263092    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:53:31.263101    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:53:31.274861    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:53:31.274872    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:53:33.794320    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:38.796501    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:38.796614    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:53:38.812056    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:53:38.812128    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:53:38.824270    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:53:38.824328    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:53:38.840769    9587 logs.go:276] 5 containers: [d95c9d62ad55 60cb95074fd8 a6cc97ce5c62 81b058de957e 16c60d7d510f]
	I0318 13:53:38.840843    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:53:38.851783    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:53:38.851852    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:53:38.869039    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:53:38.869104    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:53:38.884660    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:53:38.884723    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:53:38.894587    9587 logs.go:276] 0 containers: []
	W0318 13:53:38.894597    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:38.894643    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:53:38.905307    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:53:38.905322    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:38.905328    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:38.909863    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:53:38.909870    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:53:38.923729    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:53:38.923742    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:53:38.935909    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:53:38.935921    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:53:38.953841    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:53:38.953859    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:53:38.965285    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:38.965296    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:38.999347    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:53:38.999357    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:53:39.016022    9587 logs.go:123] Gathering logs for coredns [60cb95074fd8] ...
	I0318 13:53:39.016033    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60cb95074fd8"
	I0318 13:53:39.027659    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:53:39.027675    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:53:39.039449    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:53:39.039464    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:53:39.054530    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:53:39.054540    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:53:39.078599    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:39.078621    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:53:39.115527    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:53:39.115540    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:39.127620    9587 logs.go:123] Gathering logs for coredns [d95c9d62ad55] ...
	I0318 13:53:39.127635    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95c9d62ad55"
	I0318 13:53:39.139235    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:53:39.139247    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	W0318 13:53:39.151153    9587 logs.go:130] failed coredns [16c60d7d510f]: command: /bin/bash -c "docker logs --tail 400 16c60d7d510f" /bin/bash -c "docker logs --tail 400 16c60d7d510f": Process exited with status 1
	stdout:
	
	stderr:
	Error: No such container: 16c60d7d510f
	 output: 
	** stderr ** 
	Error: No such container: 16c60d7d510f
	
	** /stderr **
	I0318 13:53:39.151162    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:53:39.151168    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:53:41.664471    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:46.666753    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:46.670020    9587 out.go:177] 
	W0318 13:53:46.674083    9587 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0318 13:53:46.674096    9587 out.go:239] * 
	W0318 13:53:46.674929    9587 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:53:46.686040    9587 out.go:177] 

** /stderr **
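The repeating block above is the readiness loop that eventually fails: the timestamps show a probe of https://10.0.2.15:8443/healthz roughly every 2.5 seconds, each attempt abandoned after about 5 seconds ("Client.Timeout exceeded while awaiting headers"), with a fresh round of container-log gathering between probes until the 6m0s node-start budget runs out. The Go sketch below only illustrates that polling shape; it is not minikube's implementation, and the function name, retry interval, and TLS handling are assumptions made for the example.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthy is a hypothetical helper mirroring the loop in the log:
// GET the healthz endpoint, give up on each probe after 5s, retry every
// ~2.5s, and fail once the overall budget is exhausted.
func waitForHealthy(url string, budget time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped" lines
		// The guest apiserver presents a cert this host does not trust, so a
		// probe like this would typically skip verification (assumption).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				return nil // healthz reported healthy
			}
		}
		// In the real log, container logs are re-gathered here before retrying.
		time.Sleep(2500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	if err := waitForHealthy("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}

In this run every probe timed out, so a loop of this shape exhausts its budget and surfaces exactly the GUEST_START error seen earlier in the log.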
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-647000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-03-18 13:53:46.772531 -0700 PDT m=+1508.393809585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-647000 -n running-upgrade-647000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-647000 -n running-upgrade-647000: exit status 2 (15.643749667s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
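The two non-zero exits above are treated differently by the harness: exit status 80 from the start command is the failure under test, while exit status 2 from the status probe is tolerated ("may be ok"). For reference, a minimal Go sketch of how a child process's exit code can be recovered, in the spirit of these helpers; the binary path and arguments are copied from this report, and the helper name is illustrative, not the harness's actual code.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runAndReportExit is a hypothetical helper: run the command and return its
// exit code (0 on success, -1 if the process could not be started at all).
func runAndReportExit(name string, args ...string) int {
	cmd := exec.Command(name, args...)
	if err := cmd.Run(); err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode() // e.g. 80 for the failed start, 2 for the status probe
		}
		return -1
	}
	return 0
}

func main() {
	code := runAndReportExit("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "running-upgrade-647000", "-n", "running-upgrade-647000")
	fmt.Println("exit status:", code)
}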
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-647000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-570000          | force-systemd-flag-570000 | jenkins | v1.32.0 | 18 Mar 24 13:43 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-150000              | force-systemd-env-150000  | jenkins | v1.32.0 | 18 Mar 24 13:43 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-150000           | force-systemd-env-150000  | jenkins | v1.32.0 | 18 Mar 24 13:43 PDT | 18 Mar 24 13:43 PDT |
	| start   | -p docker-flags-563000                | docker-flags-563000       | jenkins | v1.32.0 | 18 Mar 24 13:43 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-570000             | force-systemd-flag-570000 | jenkins | v1.32.0 | 18 Mar 24 13:43 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-570000          | force-systemd-flag-570000 | jenkins | v1.32.0 | 18 Mar 24 13:43 PDT | 18 Mar 24 13:43 PDT |
	| start   | -p cert-expiration-526000             | cert-expiration-526000    | jenkins | v1.32.0 | 18 Mar 24 13:43 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-563000 ssh               | docker-flags-563000       | jenkins | v1.32.0 | 18 Mar 24 13:43 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-563000 ssh               | docker-flags-563000       | jenkins | v1.32.0 | 18 Mar 24 13:43 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-563000                | docker-flags-563000       | jenkins | v1.32.0 | 18 Mar 24 13:43 PDT | 18 Mar 24 13:43 PDT |
	| start   | -p cert-options-036000                | cert-options-036000       | jenkins | v1.32.0 | 18 Mar 24 13:43 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-036000 ssh               | cert-options-036000       | jenkins | v1.32.0 | 18 Mar 24 13:43 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-036000 -- sudo        | cert-options-036000       | jenkins | v1.32.0 | 18 Mar 24 13:43 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-036000                | cert-options-036000       | jenkins | v1.32.0 | 18 Mar 24 13:43 PDT | 18 Mar 24 13:43 PDT |
	| start   | -p running-upgrade-647000             | minikube                  | jenkins | v1.26.0 | 18 Mar 24 13:43 PDT | 18 Mar 24 13:45 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-647000             | running-upgrade-647000    | jenkins | v1.32.0 | 18 Mar 24 13:45 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-526000             | cert-expiration-526000    | jenkins | v1.32.0 | 18 Mar 24 13:46 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-526000             | cert-expiration-526000    | jenkins | v1.32.0 | 18 Mar 24 13:46 PDT | 18 Mar 24 13:46 PDT |
	| start   | -p kubernetes-upgrade-635000          | kubernetes-upgrade-635000 | jenkins | v1.32.0 | 18 Mar 24 13:46 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-635000          | kubernetes-upgrade-635000 | jenkins | v1.32.0 | 18 Mar 24 13:46 PDT | 18 Mar 24 13:46 PDT |
	| start   | -p kubernetes-upgrade-635000          | kubernetes-upgrade-635000 | jenkins | v1.32.0 | 18 Mar 24 13:46 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-635000          | kubernetes-upgrade-635000 | jenkins | v1.32.0 | 18 Mar 24 13:46 PDT | 18 Mar 24 13:46 PDT |
	| start   | -p stopped-upgrade-813000             | minikube                  | jenkins | v1.26.0 | 18 Mar 24 13:47 PDT | 18 Mar 24 13:47 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-813000 stop           | minikube                  | jenkins | v1.26.0 | 18 Mar 24 13:47 PDT | 18 Mar 24 13:48 PDT |
	| start   | -p stopped-upgrade-813000             | stopped-upgrade-813000    | jenkins | v1.32.0 | 18 Mar 24 13:48 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:48:03
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:48:03.278461    9750 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:48:03.278592    9750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:48:03.278596    9750 out.go:304] Setting ErrFile to fd 2...
	I0318 13:48:03.278599    9750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:48:03.278742    9750 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:48:03.279966    9750 out.go:298] Setting JSON to false
	I0318 13:48:03.299398    9750 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6455,"bootTime":1710788428,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:48:03.299482    9750 start.go:137] gopshost.Virtualization returned error: not implemented yet
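
The hostinfo line above (and the not-implemented virtualization warning) comes from a gopsutil-style host query. A minimal sketch, assuming the v3 module path:

	package main

	import (
		"encoding/json"
		"fmt"

		"github.com/shirou/gopsutil/v3/host" // assumed import path (v3)
	)

	func main() {
		info, err := host.Info() // hostname, uptime, bootTime, platform, kernelArch, hostId, ...
		if err != nil {
			fmt.Println("hostinfo error:", err)
			return
		}
		// Marshals to the same JSON shape as the logged hostinfo line.
		b, _ := json.Marshal(info)
		fmt.Println("hostinfo:", string(b))
	}
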
	I0318 13:48:03.304310    9750 out.go:177] * [stopped-upgrade-813000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:48:03.312256    9750 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:48:03.316264    9750 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:48:03.312328    9750 notify.go:220] Checking for updates...
	I0318 13:48:03.322227    9750 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:48:03.326288    9750 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:48:03.329261    9750 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:48:03.337301    9750 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:48:03.340594    9750 config.go:182] Loaded profile config "stopped-upgrade-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 13:48:03.344162    9750 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 13:48:03.347297    9750 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:48:03.351248    9750 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 13:48:03.358241    9750 start.go:297] selected driver: qemu2
	I0318 13:48:03.358247    9750 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51361 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 13:48:03.358295    9750 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:48:03.361081    9750 cni.go:84] Creating CNI manager for ""
	I0318 13:48:03.361101    9750 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:48:03.361130    9750 start.go:340] cluster config:
	{Name:stopped-upgrade-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51361 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 13:48:03.361192    9750 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:48:03.369202    9750 out.go:177] * Starting "stopped-upgrade-813000" primary control-plane node in "stopped-upgrade-813000" cluster
	I0318 13:48:03.373285    9750 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 13:48:03.373319    9750 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0318 13:48:03.373329    9750 cache.go:56] Caching tarball of preloaded images
	I0318 13:48:03.373408    9750 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:48:03.373415    9750 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0318 13:48:03.373472    9750 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/config.json ...
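
The save above persists the cluster config shown in the dump to the profile's config.json. A minimal sketch of that JSON round-trip; the ClusterConfig fields here are a hypothetical, heavily trimmed subset of the real struct:

	package main

	import (
		"encoding/json"
		"os"
	)

	// ClusterConfig is a tiny stand-in for minikube's full profile config.
	type ClusterConfig struct {
		Name              string
		Driver            string
		Memory            int
		CPUs              int
		KubernetesVersion string
	}

	func saveProfile(path string, cfg ClusterConfig) error {
		b, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			return err
		}
		return os.WriteFile(path, b, 0o644)
	}

	func main() {
		_ = saveProfile("config.json", ClusterConfig{
			Name: "stopped-upgrade-813000", Driver: "qemu2",
			Memory: 2200, CPUs: 2, KubernetesVersion: "v1.24.1",
		})
	}
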
	I0318 13:48:03.373807    9750 start.go:360] acquireMachinesLock for stopped-upgrade-813000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:48:03.373842    9750 start.go:364] duration metric: took 26.75µs to acquireMachinesLock for "stopped-upgrade-813000"
	I0318 13:48:03.373853    9750 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:48:03.373857    9750 fix.go:54] fixHost starting: 
	I0318 13:48:03.373963    9750 fix.go:112] recreateIfNeeded on stopped-upgrade-813000: state=Stopped err=<nil>
	W0318 13:48:03.373971    9750 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:48:03.377265    9750 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-813000" ...
	I0318 13:48:03.357021    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:03.357143    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:48:03.368780    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:48:03.368853    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:48:03.379401    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:48:03.379456    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:48:03.399532    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:48:03.399597    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:48:03.410955    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:48:03.411039    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:48:03.423458    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:48:03.423557    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:48:03.435087    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:48:03.435155    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:48:03.448198    9587 logs.go:276] 0 containers: []
	W0318 13:48:03.448211    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:48:03.448268    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:48:03.459241    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:48:03.459259    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:48:03.459265    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:48:03.473816    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:48:03.473834    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:48:03.489063    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:48:03.489075    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:48:03.506995    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:48:03.507015    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:48:03.512244    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:48:03.512256    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:48:03.529689    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:48:03.529700    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:48:03.547302    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:48:03.547314    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:48:03.573469    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:48:03.573490    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:48:03.613603    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:48:03.613623    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:48:03.630403    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:48:03.630417    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:48:03.643546    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:48:03.643559    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:48:03.655816    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:48:03.655829    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:48:03.668649    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:48:03.668663    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:48:03.706748    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:48:03.706760    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:48:03.718626    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:48:03.718639    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:48:03.731054    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:48:03.731065    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
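
Each "N containers:" line above comes from the same query: docker ps -a with a name filter and an ID-only format string. A minimal sketch of that lookup; containerIDs is a hypothetical helper and the command is run locally here rather than over the SSH runner:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors: docker ps -a --filter=name=<name> --format={{.ID}}
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name="+name, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		// One ID per output line; Fields also drops the trailing newline.
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := containerIDs("k8s_kube-apiserver")
		fmt.Println(ids, err) // e.g. [4110972b2abf 84cd5d05ad71]
	}
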
	I0318 13:48:06.272155    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:03.385342    9750 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/stopped-upgrade-813000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/stopped-upgrade-813000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/stopped-upgrade-813000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51326-:22,hostfwd=tcp::51327-:2376,hostname=stopped-upgrade-813000 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/stopped-upgrade-813000/disk.qcow2
	I0318 13:48:03.434980    9750 main.go:141] libmachine: STDOUT: 
	I0318 13:48:03.435023    9750 main.go:141] libmachine: STDERR: 
	I0318 13:48:03.435029    9750 main.go:141] libmachine: Waiting for VM to start (ssh -p 51326 docker@127.0.0.1)...
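
A minimal sketch of launching the VM the way the logged qemu-system-aarch64 invocation above does, via os/exec; the paths are placeholders and the flag list is abridged from the log:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Abridged from the logged command line; -daemonize lets qemu
		// background itself, so a successful run returns promptly.
		cmd := exec.Command("qemu-system-aarch64",
			"-M", "virt,highmem=off",
			"-cpu", "host",
			"-accel", "hvf",
			"-m", "2200",
			"-smp", "2",
			"-boot", "d",
			"-cdrom", "/path/to/boot2docker.iso",
			"-nic", "user,model=virtio,hostfwd=tcp::51326-:22",
			"-daemonize", "/path/to/disk.qcow2",
		)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("qemu failed: %v\n%s", err, out)
		}
	}
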
	I0318 13:48:11.274217    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:11.274339    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:48:11.285470    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:48:11.285539    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:48:11.296797    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:48:11.296872    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:48:11.309962    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:48:11.310026    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:48:11.320568    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:48:11.320637    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:48:11.331234    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:48:11.331299    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:48:11.342122    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:48:11.342183    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:48:11.354380    9587 logs.go:276] 0 containers: []
	W0318 13:48:11.354394    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:48:11.354443    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:48:11.365273    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:48:11.365289    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:48:11.365294    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:48:11.370683    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:48:11.370691    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:48:11.384813    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:48:11.384824    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:48:11.408267    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:48:11.408277    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:48:11.419452    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:48:11.419462    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:48:11.458233    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:48:11.458243    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:48:11.476264    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:48:11.476273    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:48:11.491073    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:48:11.491086    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:48:11.502566    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:48:11.502577    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:48:11.520964    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:48:11.520977    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:48:11.557616    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:48:11.557631    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:48:11.570239    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:48:11.570250    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:48:11.582793    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:48:11.582804    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:48:11.621679    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:48:11.621688    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:48:11.634673    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:48:11.634683    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:48:11.646607    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:48:11.646618    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:48:14.173110    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:19.175501    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
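
The repeated "stopped: ... context deadline exceeded" lines above are failed apiserver healthz probes. A minimal sketch of such a probe with a short client timeout; TLS verification is skipped only to keep the sketch self-contained (the real client presumably carries cluster credentials):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // probe must answer quickly or count as stopped
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // matches the timeouts in the log
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}
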
	I0318 13:48:19.175945    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:48:19.217327    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:48:19.217475    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:48:19.238289    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:48:19.238403    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:48:19.253574    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:48:19.253648    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:48:19.266047    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:48:19.266130    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:48:19.277542    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:48:19.277612    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:48:19.288085    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:48:19.288153    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:48:19.298734    9587 logs.go:276] 0 containers: []
	W0318 13:48:19.298748    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:48:19.298809    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:48:19.309877    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:48:19.309895    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:48:19.309901    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:48:19.321928    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:48:19.321938    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:48:19.333741    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:48:19.333753    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:48:19.345910    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:48:19.345922    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:48:19.380421    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:48:19.380432    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:48:19.420034    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:48:19.420048    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:48:19.433687    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:48:19.433697    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:48:19.471440    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:48:19.471448    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:48:19.475665    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:48:19.475671    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:48:19.487382    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:48:19.487394    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:48:19.499286    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:48:19.499300    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:48:19.524974    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:48:19.524986    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:48:19.539747    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:48:19.539757    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:48:19.550719    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:48:19.550734    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:48:19.565961    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:48:19.565974    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:48:19.584185    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:48:19.584198    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:48:22.397216    9750 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/config.json ...
	I0318 13:48:22.397509    9750 machine.go:94] provisionDockerMachine start ...
	I0318 13:48:22.397562    9750 main.go:141] libmachine: Using SSH client type: native
	I0318 13:48:22.397716    9750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b95bf0] 0x104b98450 <nil>  [] 0s} localhost 51326 <nil> <nil>}
	I0318 13:48:22.397723    9750 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:48:22.459680    9750 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:48:22.459698    9750 buildroot.go:166] provisioning hostname "stopped-upgrade-813000"
	I0318 13:48:22.459751    9750 main.go:141] libmachine: Using SSH client type: native
	I0318 13:48:22.459871    9750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b95bf0] 0x104b98450 <nil>  [] 0s} localhost 51326 <nil> <nil>}
	I0318 13:48:22.459876    9750 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-813000 && echo "stopped-upgrade-813000" | sudo tee /etc/hostname
	I0318 13:48:22.523290    9750 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-813000
	
	I0318 13:48:22.523342    9750 main.go:141] libmachine: Using SSH client type: native
	I0318 13:48:22.523462    9750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b95bf0] 0x104b98450 <nil>  [] 0s} localhost 51326 <nil> <nil>}
	I0318 13:48:22.523471    9750 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-813000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-813000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-813000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:48:22.586315    9750 main.go:141] libmachine: SSH cmd err, output: <nil>: 
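
The provisioning steps above run shell commands over SSH to the forwarded port. A minimal sketch using golang.org/x/crypto/ssh; the port and key path are taken from the log, and host-key checking is relaxed only because the target is a local test VM:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("/path/to/machines/stopped-upgrade-813000/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test VM only
		}
		client, err := ssh.Dial("tcp", "localhost:51326", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()
		// Same command the provisioner runs above.
		out, _ := session.CombinedOutput(`sudo hostname stopped-upgrade-813000 && echo "stopped-upgrade-813000" | sudo tee /etc/hostname`)
		fmt.Printf("%s", out)
	}
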
	I0318 13:48:22.586328    9750 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18421-6777/.minikube CaCertPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18421-6777/.minikube}
	I0318 13:48:22.586336    9750 buildroot.go:174] setting up certificates
	I0318 13:48:22.586341    9750 provision.go:84] configureAuth start
	I0318 13:48:22.586349    9750 provision.go:143] copyHostCerts
	I0318 13:48:22.586424    9750 exec_runner.go:144] found /Users/jenkins/minikube-integration/18421-6777/.minikube/ca.pem, removing ...
	I0318 13:48:22.586429    9750 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18421-6777/.minikube/ca.pem
	I0318 13:48:22.586535    9750 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18421-6777/.minikube/ca.pem (1078 bytes)
	I0318 13:48:22.586749    9750 exec_runner.go:144] found /Users/jenkins/minikube-integration/18421-6777/.minikube/cert.pem, removing ...
	I0318 13:48:22.586753    9750 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18421-6777/.minikube/cert.pem
	I0318 13:48:22.586801    9750 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18421-6777/.minikube/cert.pem (1123 bytes)
	I0318 13:48:22.586914    9750 exec_runner.go:144] found /Users/jenkins/minikube-integration/18421-6777/.minikube/key.pem, removing ...
	I0318 13:48:22.586918    9750 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18421-6777/.minikube/key.pem
	I0318 13:48:22.586962    9750 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18421-6777/.minikube/key.pem (1679 bytes)
	I0318 13:48:22.587054    9750 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-813000 san=[127.0.0.1 localhost minikube stopped-upgrade-813000]
	I0318 13:48:22.677450    9750 provision.go:177] copyRemoteCerts
	I0318 13:48:22.677482    9750 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:48:22.677490    9750 sshutil.go:53] new ssh client: &{IP:localhost Port:51326 SSHKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/stopped-upgrade-813000/id_rsa Username:docker}
	I0318 13:48:22.709545    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:48:22.716288    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0318 13:48:22.722874    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 13:48:22.730001    9750 provision.go:87] duration metric: took 143.651459ms to configureAuth
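
copyHostCerts above follows a found/remove/copy pattern per certificate. A minimal local-filesystem sketch of one such copy; copyCert is a hypothetical helper:

	package main

	import (
		"io"
		"os"
	)

	func copyCert(src, dst string) error {
		// "found <dst>, removing ..." step from the log.
		if _, err := os.Stat(dst); err == nil {
			if err := os.Remove(dst); err != nil {
				return err
			}
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		_ = copyCert(".minikube/certs/ca.pem", ".minikube/ca.pem")
	}
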
	I0318 13:48:22.730010    9750 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:48:22.730117    9750 config.go:182] Loaded profile config "stopped-upgrade-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 13:48:22.730152    9750 main.go:141] libmachine: Using SSH client type: native
	I0318 13:48:22.730245    9750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b95bf0] 0x104b98450 <nil>  [] 0s} localhost 51326 <nil> <nil>}
	I0318 13:48:22.730249    9750 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 13:48:22.788162    9750 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 13:48:22.788170    9750 buildroot.go:70] root file system type: tmpfs
	I0318 13:48:22.788221    9750 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 13:48:22.788260    9750 main.go:141] libmachine: Using SSH client type: native
	I0318 13:48:22.788358    9750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b95bf0] 0x104b98450 <nil>  [] 0s} localhost 51326 <nil> <nil>}
	I0318 13:48:22.788389    9750 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 13:48:22.851062    9750 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 13:48:22.851114    9750 main.go:141] libmachine: Using SSH client type: native
	I0318 13:48:22.851217    9750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b95bf0] 0x104b98450 <nil>  [] 0s} localhost 51326 <nil> <nil>}
	I0318 13:48:22.851228    9750 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 13:48:23.222358    9750 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0318 13:48:23.222373    9750 machine.go:97] duration metric: took 824.8625ms to provisionDockerMachine
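
The docker.service update above only swaps in the new unit when it differs from the live one, then reloads and restarts. A minimal sketch of that compare-and-replace step; paths are shortened and the systemctl calls mirror the logged shell command:

	package main

	import (
		"bytes"
		"os"
		"os/exec"
	)

	func main() {
		newUnit, _ := os.ReadFile("docker.service.new")
		old, err := os.ReadFile("docker.service")
		if err == nil && bytes.Equal(old, newUnit) {
			return // units identical, nothing to do
		}
		// err != nil covers the "can't stat docker.service" case in the log:
		// the unit does not exist yet, so install it unconditionally.
		_ = os.Rename("docker.service.new", "docker.service")
		_ = exec.Command("systemctl", "daemon-reload").Run()
		_ = exec.Command("systemctl", "-f", "enable", "docker").Run()
		_ = exec.Command("systemctl", "-f", "restart", "docker").Run()
	}
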
	I0318 13:48:23.222381    9750 start.go:293] postStartSetup for "stopped-upgrade-813000" (driver="qemu2")
	I0318 13:48:23.222388    9750 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:48:23.222456    9750 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:48:23.222468    9750 sshutil.go:53] new ssh client: &{IP:localhost Port:51326 SSHKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/stopped-upgrade-813000/id_rsa Username:docker}
	I0318 13:48:23.254873    9750 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:48:23.256307    9750 info.go:137] Remote host: Buildroot 2021.02.12
	I0318 13:48:23.256319    9750 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18421-6777/.minikube/addons for local assets ...
	I0318 13:48:23.256395    9750 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18421-6777/.minikube/files for local assets ...
	I0318 13:48:23.256510    9750 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18421-6777/.minikube/files/etc/ssl/certs/72362.pem -> 72362.pem in /etc/ssl/certs
	I0318 13:48:23.256633    9750 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:48:23.259194    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/files/etc/ssl/certs/72362.pem --> /etc/ssl/certs/72362.pem (1708 bytes)
	I0318 13:48:23.266139    9750 start.go:296] duration metric: took 43.751292ms for postStartSetup
	I0318 13:48:23.266154    9750 fix.go:56] duration metric: took 19.892398917s for fixHost
	I0318 13:48:23.266210    9750 main.go:141] libmachine: Using SSH client type: native
	I0318 13:48:23.266311    9750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b95bf0] 0x104b98450 <nil>  [] 0s} localhost 51326 <nil> <nil>}
	I0318 13:48:23.266316    9750 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:48:23.323703    9750 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710794903.764309837
	
	I0318 13:48:23.323712    9750 fix.go:216] guest clock: 1710794903.764309837
	I0318 13:48:23.323716    9750 fix.go:229] Guest: 2024-03-18 13:48:23.764309837 -0700 PDT Remote: 2024-03-18 13:48:23.266168 -0700 PDT m=+20.024459085 (delta=498.141837ms)
	I0318 13:48:23.323735    9750 fix.go:200] guest clock delta is within tolerance: 498.141837ms
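
The guest-clock check above parses the guest's `date +%s.%N` output and compares it against host time. A minimal sketch with the logged value hard-coded; the 2s tolerance is this sketch's own assumption:

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	func main() {
		guestOut := "1710794903.764309837" // fractional unix seconds from the guest
		secs, _ := strconv.ParseFloat(guestOut, 64)
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := guest.Sub(time.Now())
		if delta < 0 {
			delta = -delta
		}
		fmt.Println("guest clock delta:", delta,
			"within tolerance:", delta < 2*time.Second)
	}
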
	I0318 13:48:23.323741    9750 start.go:83] releasing machines lock for "stopped-upgrade-813000", held for 19.949992708s
	I0318 13:48:23.323810    9750 ssh_runner.go:195] Run: cat /version.json
	I0318 13:48:23.323819    9750 sshutil.go:53] new ssh client: &{IP:localhost Port:51326 SSHKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/stopped-upgrade-813000/id_rsa Username:docker}
	I0318 13:48:23.323810    9750 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:48:23.323868    9750 sshutil.go:53] new ssh client: &{IP:localhost Port:51326 SSHKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/stopped-upgrade-813000/id_rsa Username:docker}
	W0318 13:48:23.324383    9750 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51326: connect: connection refused
	I0318 13:48:23.324408    9750 retry.go:31] will retry after 342.017178ms: dial tcp [::1]:51326: connect: connection refused
	W0318 13:48:23.351520    9750 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0318 13:48:23.351568    9750 ssh_runner.go:195] Run: systemctl --version
	I0318 13:48:23.353339    9750 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:48:23.354888    9750 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:48:23.354916    9750 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0318 13:48:23.358021    9750 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0318 13:48:23.362633    9750 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:48:23.362642    9750 start.go:494] detecting cgroup driver to use...
	I0318 13:48:23.362721    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:48:23.369689    9750 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0318 13:48:23.373124    9750 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 13:48:23.375821    9750 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 13:48:23.375853    9750 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 13:48:23.378758    9750 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 13:48:23.382262    9750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 13:48:23.385708    9750 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 13:48:23.388657    9750 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:48:23.391424    9750 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 13:48:23.394647    9750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0318 13:48:23.398069    9750 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0318 13:48:23.401502    9750 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:48:23.404164    9750 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:48:23.406911    9750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:48:23.468783    9750 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 13:48:23.474384    9750 start.go:494] detecting cgroup driver to use...
	I0318 13:48:23.474460    9750 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 13:48:23.483614    9750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:48:23.488250    9750 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:48:23.497324    9750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:48:23.501945    9750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 13:48:23.506398    9750 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 13:48:23.561953    9750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 13:48:23.567235    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:48:23.572619    9750 ssh_runner.go:195] Run: which cri-dockerd
	I0318 13:48:23.573857    9750 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 13:48:23.576425    9750 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 13:48:23.581214    9750 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 13:48:23.658767    9750 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 13:48:23.743875    9750 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 13:48:23.744373    9750 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 13:48:23.750122    9750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:48:23.826910    9750 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 13:48:24.968784    9750 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.141859958s)
	I0318 13:48:24.968846    9750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 13:48:24.973800    9750 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0318 13:48:24.979896    9750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 13:48:24.984316    9750 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 13:48:25.066772    9750 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 13:48:25.146685    9750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:48:25.224346    9750 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 13:48:25.229819    9750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 13:48:25.234994    9750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:48:25.296231    9750 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 13:48:25.341799    9750 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 13:48:25.341885    9750 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 13:48:25.344821    9750 start.go:562] Will wait 60s for crictl version
	I0318 13:48:25.344874    9750 ssh_runner.go:195] Run: which crictl
	I0318 13:48:25.346112    9750 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:48:25.361262    9750 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0318 13:48:25.361331    9750 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 13:48:25.379956    9750 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 13:48:22.111250    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:25.402207    9750 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0318 13:48:25.402321    9750 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0318 13:48:25.403619    9750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
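
The /etc/hosts rewrite above drops any stale host.minikube.internal entry and appends the current mapping. A minimal sketch of the same filter-and-append, operating on a local file:

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		data, _ := os.ReadFile("/etc/hosts")
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		var kept []string
		for _, line := range lines {
			// Mirrors: grep -v $'\thost.minikube.internal$'
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, "10.0.2.2\thost.minikube.internal")
		_ = os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}
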
	I0318 13:48:25.407216    9750 kubeadm.go:877] updating cluster {Name:stopped-upgrade-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51361 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0318 13:48:25.407267    9750 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 13:48:25.407306    9750 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 13:48:25.417661    9750 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 13:48:25.417669    9750 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 13:48:25.417711    9750 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 13:48:25.420947    9750 ssh_runner.go:195] Run: which lz4
	I0318 13:48:25.422157    9750 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 13:48:25.423286    9750 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:48:25.423295    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0318 13:48:26.122521    9750 docker.go:649] duration metric: took 700.398875ms to copy over tarball
	I0318 13:48:26.122583    9750 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:48:27.312202    9750 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.18960475s)
	I0318 13:48:27.312217    9750 ssh_runner.go:146] rm: /preloaded.tar.lz4
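
The preload step above checks for /preloaded.tar.lz4 on the machine, copies the cached tarball over when missing, and unpacks it with tar. A minimal sketch; commands run locally here instead of through the SSH runner, and the scp itself is elided:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Existence check mirrors the logged stat on /preloaded.tar.lz4.
		if err := exec.Command("stat", "/preloaded.tar.lz4").Run(); err != nil {
			log.Println("not present; would copy the cached tarball over first")
		}
		// Extraction mirrors the logged tar invocation.
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
	}
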
	I0318 13:48:27.329024    9750 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 13:48:27.332253    9750 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0318 13:48:27.337924    9750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:48:27.422330    9750 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 13:48:27.113780    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:27.113921    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:48:27.129786    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:48:27.129863    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:48:27.143509    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:48:27.143582    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:48:27.155720    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:48:27.155799    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:48:27.168036    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:48:27.168114    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:48:27.180167    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:48:27.180231    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:48:27.195887    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:48:27.195959    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:48:27.207253    9587 logs.go:276] 0 containers: []
	W0318 13:48:27.207265    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:48:27.207326    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:48:27.225961    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:48:27.225980    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:48:27.225987    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:48:27.243712    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:48:27.243729    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:48:27.259494    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:48:27.259507    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:48:27.271739    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:48:27.271750    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:48:27.298036    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:48:27.298051    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:48:27.303273    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:48:27.303285    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:48:27.343255    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:48:27.343266    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:48:27.381836    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:48:27.381852    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:48:27.394316    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:48:27.394330    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:48:27.406972    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:48:27.406984    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:48:27.445944    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:48:27.445960    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:48:27.460172    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:48:27.460182    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:48:27.479013    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:48:27.479024    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:48:27.493838    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:48:27.493851    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:48:27.505860    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:48:27.505876    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:48:27.524010    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:48:27.524021    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:48:30.038160    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:28.939162    9750 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.516821s)
	I0318 13:48:28.939265    9750 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 13:48:28.953997    9750 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 13:48:28.954007    9750 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 13:48:28.954012    9750 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 13:48:28.961543    9750 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 13:48:28.961554    9750 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0318 13:48:28.961793    9750 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 13:48:28.961905    9750 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 13:48:28.961961    9750 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:48:28.962274    9750 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 13:48:28.962523    9750 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0318 13:48:28.962722    9750 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 13:48:28.971033    9750 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0318 13:48:28.971103    9750 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:48:28.971841    9750 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 13:48:28.971881    9750 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 13:48:28.971983    9750 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 13:48:28.971963    9750 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 13:48:28.971992    9750 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 13:48:28.972028    9750 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0318 13:48:30.921805    9750 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0318 13:48:30.962632    9750 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0318 13:48:30.962684    9750 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0318 13:48:30.962777    9750 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0318 13:48:30.984481    9750 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0318 13:48:30.987224    9750 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0318 13:48:30.987346    9750 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0318 13:48:31.001203    9750 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0318 13:48:31.001233    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0318 13:48:31.001317    9750 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0318 13:48:31.001343    9750 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 13:48:31.001384    9750 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0318 13:48:31.014028    9750 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0318 13:48:31.014039    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0318 13:48:31.015809    9750 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0318 13:48:31.018434    9750 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0318 13:48:31.045854    9750 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0318 13:48:31.045886    9750 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0318 13:48:31.045904    9750 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0318 13:48:31.045958    9750 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0318 13:48:31.047681    9750 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	W0318 13:48:31.048119    9750 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0318 13:48:31.048203    9750 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0318 13:48:31.054814    9750 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0318 13:48:31.056032    9750 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 13:48:31.059158    9750 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0318 13:48:31.059268    9750 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0318 13:48:31.072057    9750 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0318 13:48:31.072079    9750 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 13:48:31.072060    9750 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0318 13:48:31.072128    9750 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 13:48:31.072131    9750 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0318 13:48:31.072155    9750 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0318 13:48:31.077328    9750 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0318 13:48:31.077347    9750 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 13:48:31.077400    9750 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0318 13:48:31.079606    9750 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0318 13:48:31.079629    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0318 13:48:31.079658    9750 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0318 13:48:31.079671    9750 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 13:48:31.079701    9750 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 13:48:31.111637    9750 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0318 13:48:31.111638    9750 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0318 13:48:31.111696    9750 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0318 13:48:31.111751    9750 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0318 13:48:31.116583    9750 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0318 13:48:31.118432    9750 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0318 13:48:31.118461    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0318 13:48:31.185141    9750 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0318 13:48:31.185155    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0318 13:48:31.325908    9750 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0318 13:48:31.352797    9750 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0318 13:48:31.352810    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0318 13:48:31.404965    9750 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0318 13:48:31.405091    9750 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:48:31.493440    9750 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0318 13:48:31.493460    9750 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0318 13:48:31.493478    9750 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:48:31.493530    9750 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:48:31.507028    9750 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 13:48:31.507141    9750 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:48:31.508557    9750 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0318 13:48:31.508569    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0318 13:48:31.534632    9750 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:48:31.534645    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0318 13:48:31.780874    9750 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 13:48:31.780913    9750 cache_images.go:92] duration metric: took 2.826909209s to LoadCachedImages
	W0318 13:48:31.780956    9750 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0318 13:48:31.780963    9750 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0318 13:48:31.781027    9750 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-813000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:48:31.781085    9750 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 13:48:31.794106    9750 cni.go:84] Creating CNI manager for ""
	I0318 13:48:31.794119    9750 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:48:31.794124    9750 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:48:31.794132    9750 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-813000 NodeName:stopped-upgrade-813000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:48:31.794204    9750 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-813000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:48:31.794260    9750 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0318 13:48:31.797650    9750 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:48:31.797681    9750 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:48:31.800461    9750 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0318 13:48:31.805323    9750 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:48:31.810302    9750 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0318 13:48:31.815797    9750 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0318 13:48:31.817078    9750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:48:31.820636    9750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:48:31.884244    9750 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:48:31.894227    9750 certs.go:68] Setting up /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000 for IP: 10.0.2.15
	I0318 13:48:31.894237    9750 certs.go:194] generating shared ca certs ...
	I0318 13:48:31.894246    9750 certs.go:226] acquiring lock for ca certs: {Name:mkb77ca79ad1917526a647bf0189e0c89f5a836a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:48:31.894399    9750 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/ca.key
	I0318 13:48:31.895203    9750 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/proxy-client-ca.key
	I0318 13:48:31.895211    9750 certs.go:256] generating profile certs ...
	I0318 13:48:31.895407    9750 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/client.key
	I0318 13:48:31.895429    9750 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.key.b3f91078
	I0318 13:48:31.895442    9750 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.crt.b3f91078 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0318 13:48:32.086926    9750 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.crt.b3f91078 ...
	I0318 13:48:32.086945    9750 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.crt.b3f91078: {Name:mkf4eae5165cc01f8e05b702f75f9a115150bce0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:48:32.087278    9750 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.key.b3f91078 ...
	I0318 13:48:32.087283    9750 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.key.b3f91078: {Name:mkeb2db62c86a688fb8027b3cb32820cacd322df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:48:32.087401    9750 certs.go:381] copying /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.crt.b3f91078 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.crt
	I0318 13:48:32.087604    9750 certs.go:385] copying /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.key.b3f91078 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.key
	I0318 13:48:32.087995    9750 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/proxy-client.key
	I0318 13:48:32.088172    9750 certs.go:484] found cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/7236.pem (1338 bytes)
	W0318 13:48:32.088405    9750 certs.go:480] ignoring /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/7236_empty.pem, impossibly tiny 0 bytes
	I0318 13:48:32.088414    9750 certs.go:484] found cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:48:32.088439    9750 certs.go:484] found cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:48:32.088472    9750 certs.go:484] found cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:48:32.088493    9750 certs.go:484] found cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/key.pem (1679 bytes)
	I0318 13:48:32.088548    9750 certs.go:484] found cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/files/etc/ssl/certs/72362.pem (1708 bytes)
	I0318 13:48:32.088921    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:48:32.096100    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:48:32.102817    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:48:32.109878    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:48:32.116340    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 13:48:32.122527    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:48:32.130047    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:48:32.137433    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 13:48:32.144363    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/files/etc/ssl/certs/72362.pem --> /usr/share/ca-certificates/72362.pem (1708 bytes)
	I0318 13:48:32.150990    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:48:32.157499    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/7236.pem --> /usr/share/ca-certificates/7236.pem (1338 bytes)
	I0318 13:48:32.164069    9750 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:48:32.168942    9750 ssh_runner.go:195] Run: openssl version
	I0318 13:48:32.170830    9750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72362.pem && ln -fs /usr/share/ca-certificates/72362.pem /etc/ssl/certs/72362.pem"
	I0318 13:48:32.174365    9750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72362.pem
	I0318 13:48:32.175911    9750 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:31 /usr/share/ca-certificates/72362.pem
	I0318 13:48:32.175933    9750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72362.pem
	I0318 13:48:32.177609    9750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:48:32.180578    9750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:48:32.183440    9750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:48:32.184990    9750 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:44 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:48:32.185011    9750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:48:32.186617    9750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:48:32.189912    9750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7236.pem && ln -fs /usr/share/ca-certificates/7236.pem /etc/ssl/certs/7236.pem"
	I0318 13:48:32.193123    9750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7236.pem
	I0318 13:48:32.194629    9750 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:31 /usr/share/ca-certificates/7236.pem
	I0318 13:48:32.194649    9750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7236.pem
	I0318 13:48:32.196763    9750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7236.pem /etc/ssl/certs/51391683.0"
	I0318 13:48:32.199522    9750 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:48:32.200899    9750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:48:32.202760    9750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:48:32.204865    9750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:48:32.206868    9750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:48:32.208607    9750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:48:32.210279    9750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 13:48:32.212187    9750 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51361 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 13:48:32.212252    9750 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 13:48:32.222241    9750 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:48:32.225348    9750 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:48:32.225355    9750 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:48:32.225357    9750 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:48:32.225380    9750 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:48:32.228003    9750 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:48:32.228309    9750 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-813000" does not appear in /Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:48:32.228413    9750 kubeconfig.go:62] /Users/jenkins/minikube-integration/18421-6777/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-813000" cluster setting kubeconfig missing "stopped-upgrade-813000" context setting]
	I0318 13:48:32.228623    9750 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/kubeconfig: {Name:mk6a62990bf9328d54440f15380010f8199a9228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:48:32.229042    9750 kapi.go:59] client config for stopped-upgrade-813000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/client.key", CAFile:"/Users/jenkins/minikube-integration/18421-6777/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105e86a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 13:48:32.229478    9750 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:48:32.232029    9750 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-813000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0318 13:48:32.232035    9750 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:48:32.232067    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 13:48:32.243008    9750 docker.go:483] Stopping containers: [d40fae90d1aa ba3504103d36 9353fb6ad2b7 9e22a05ae9a3 67619eb167c0 a67d887e308c d6a44a7b025e b531e5fe4674]
	I0318 13:48:32.243065    9750 ssh_runner.go:195] Run: docker stop d40fae90d1aa ba3504103d36 9353fb6ad2b7 9e22a05ae9a3 67619eb167c0 a67d887e308c d6a44a7b025e b531e5fe4674
	I0318 13:48:32.256084    9750 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:48:32.261863    9750 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:48:32.265199    9750 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:48:32.265205    9750 kubeadm.go:156] found existing configuration files:
	
	I0318 13:48:32.265232    9750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/admin.conf
	I0318 13:48:32.267648    9750 kubeadm.go:162] "https://control-plane.minikube.internal:51361" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:48:32.267671    9750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:48:32.270328    9750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/kubelet.conf
	I0318 13:48:32.272944    9750 kubeadm.go:162] "https://control-plane.minikube.internal:51361" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:48:32.272963    9750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:48:32.275317    9750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/controller-manager.conf
	I0318 13:48:32.278231    9750 kubeadm.go:162] "https://control-plane.minikube.internal:51361" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:48:32.278255    9750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:48:32.281383    9750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/scheduler.conf
	I0318 13:48:32.283866    9750 kubeadm.go:162] "https://control-plane.minikube.internal:51361" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:48:32.283893    9750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:48:32.286651    9750 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:48:32.289673    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:48:32.312776    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:48:32.702117    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:48:32.834775    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:48:32.857164    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:48:32.879218    9750 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:48:32.879302    9750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:48:35.040701    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:35.040790    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:48:35.054306    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:48:35.054374    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:48:35.065221    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:48:35.065286    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:48:35.076225    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:48:35.076288    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:48:35.087005    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:48:35.087079    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:48:35.097376    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:48:35.097436    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:48:35.108161    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:48:35.108236    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:48:35.118196    9587 logs.go:276] 0 containers: []
	W0318 13:48:35.118207    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:48:35.118255    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:48:35.128926    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:48:35.128941    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:48:35.128946    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:48:35.165493    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:48:35.165503    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:48:35.182635    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:48:35.182646    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:48:35.195920    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:48:35.195930    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:48:35.220673    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:48:35.220682    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:48:35.232185    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:48:35.232196    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:48:35.244889    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:48:35.244899    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:48:35.283323    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:48:35.283331    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:48:35.287402    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:48:35.287412    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:48:35.302620    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:48:35.302632    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:48:35.315410    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:48:35.315420    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:48:35.334495    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:48:35.334510    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:48:35.345858    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:48:35.345871    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:48:35.381352    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:48:35.381365    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:48:35.395781    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:48:35.395792    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:48:35.406884    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:48:35.406894    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:48:33.381379    9750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:48:33.881350    9750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:48:33.885317    9750 api_server.go:72] duration metric: took 1.006106166s to wait for apiserver process to appear ...
	I0318 13:48:33.885327    9750 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:48:33.885341    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:37.920645    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:38.887465    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:38.887533    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:42.922982    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:42.923101    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:48:42.935433    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:48:42.935506    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:48:42.946582    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:48:42.946656    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:48:42.958951    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:48:42.959012    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:48:42.972381    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:48:42.972457    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:48:42.984423    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:48:42.984492    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:48:42.995222    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:48:42.995285    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:48:43.006306    9587 logs.go:276] 0 containers: []
	W0318 13:48:43.006320    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:48:43.006376    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:48:43.016964    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:48:43.016981    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:48:43.016988    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:48:43.055613    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:48:43.055628    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:48:43.060643    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:48:43.060650    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:48:43.102346    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:48:43.102357    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:48:43.141865    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:48:43.141875    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:48:43.153479    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:48:43.153488    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:48:43.165534    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:48:43.165546    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:48:43.180490    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:48:43.180501    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:48:43.205036    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:48:43.205044    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:48:43.219592    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:48:43.219601    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:48:43.232829    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:48:43.232840    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:48:43.248089    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:48:43.248100    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:48:43.258822    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:48:43.258832    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:48:43.270283    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:48:43.270293    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:48:43.288238    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:48:43.288246    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:48:43.300204    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:48:43.300215    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:48:45.814601    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:43.887924    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:43.887966    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:50.816763    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:50.816890    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:48:50.827959    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:48:50.828032    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:48:50.839783    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:48:50.839857    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:48:50.851751    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:48:50.851821    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:48:50.863149    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:48:50.863280    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:48:50.875113    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:48:50.875182    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:48:50.887871    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:48:50.887948    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:48:50.899906    9587 logs.go:276] 0 containers: []
	W0318 13:48:50.899919    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:48:50.899983    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:48:50.910598    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:48:50.910615    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:48:50.910622    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:48:50.948621    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:48:50.948638    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:48:50.992643    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:48:50.992662    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:48:51.006116    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:48:51.006131    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:48:51.044067    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:48:51.044079    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:48:51.059327    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:48:51.059340    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:48:51.080150    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:48:51.080164    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:48:51.091926    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:48:51.091936    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:48:51.103589    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:48:51.103602    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:48:51.117689    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:48:51.117702    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:48:51.129661    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:48:51.129677    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:48:51.153941    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:48:51.153955    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:48:51.158738    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:48:51.158745    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:48:51.178395    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:48:51.178407    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:48:51.192378    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:48:51.192390    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:48:51.205782    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:48:51.205793    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:48:48.888359    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:48.888394    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:53.722930    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:53.888990    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:53.889092    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:58.725315    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:58.725541    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:48:58.749190    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:48:58.749305    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:48:58.764585    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:48:58.764666    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:48:58.776354    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:48:58.776428    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:48:58.787125    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:48:58.787198    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:48:58.797733    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:48:58.797804    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:48:58.812899    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:48:58.812966    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:48:58.823038    9587 logs.go:276] 0 containers: []
	W0318 13:48:58.823049    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:48:58.823107    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:48:58.833150    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:48:58.833166    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:48:58.833174    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:48:58.870860    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:48:58.870871    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:48:58.884770    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:48:58.884781    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:48:58.896801    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:48:58.896816    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:48:58.931671    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:48:58.931682    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:48:58.949303    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:48:58.949314    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:48:58.961469    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:48:58.961480    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:48:58.965678    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:48:58.965687    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:48:58.981079    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:48:58.981092    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:48:58.999396    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:48:58.999406    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:48:59.014288    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:48:59.014298    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:48:59.025648    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:48:59.025663    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:48:59.038012    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:48:59.038023    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:48:59.050694    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:48:59.050705    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:48:59.092407    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:48:59.092421    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:48:59.106448    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:48:59.106462    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:48:58.889975    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:58.889994    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:01.631559    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:03.890659    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:03.890751    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:06.633825    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:06.633916    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:49:06.644629    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:49:06.644701    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:49:06.655686    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:49:06.655750    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:49:06.668072    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:49:06.668139    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:49:06.678618    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:49:06.678678    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:49:06.689681    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:49:06.689755    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:49:06.701086    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:49:06.701149    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:49:06.711831    9587 logs.go:276] 0 containers: []
	W0318 13:49:06.711844    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:49:06.711895    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:49:06.722585    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:49:06.722605    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:49:06.722611    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:49:06.744927    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:49:06.744938    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:49:06.762187    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:49:06.762198    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:49:06.778348    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:49:06.778359    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:49:06.790659    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:49:06.790668    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:49:06.826204    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:49:06.826215    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:49:06.830635    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:49:06.830642    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:49:06.868475    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:49:06.868488    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:49:06.882854    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:49:06.882869    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:49:06.903882    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:49:06.903893    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:49:06.941703    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:49:06.941711    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:49:06.955651    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:49:06.955662    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:49:06.967142    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:49:06.967154    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:49:06.978906    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:49:06.978918    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:49:07.002943    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:49:07.002952    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:49:07.014752    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:49:07.014765    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:49:09.530877    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:08.892282    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:08.892342    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:14.531521    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:14.531673    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:49:14.548959    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:49:14.549054    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:49:14.562061    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:49:14.562132    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:49:14.578097    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:49:14.578164    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:49:14.594904    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:49:14.594968    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:49:14.605616    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:49:14.605680    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:49:14.616248    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:49:14.616318    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:49:14.628227    9587 logs.go:276] 0 containers: []
	W0318 13:49:14.628240    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:49:14.628296    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:49:14.640357    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:49:14.640376    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:49:14.640381    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:49:14.652838    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:49:14.652848    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:49:14.657555    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:49:14.657564    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:49:14.671566    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:49:14.671579    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:49:14.683107    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:49:14.683119    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:49:14.695997    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:49:14.696009    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:49:14.713135    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:49:14.713148    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:49:14.730729    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:49:14.730742    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:49:14.745550    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:49:14.745559    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:49:14.767853    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:49:14.767861    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:49:14.780420    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:49:14.780430    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:49:14.792269    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:49:14.792280    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:49:14.804472    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:49:14.804485    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:49:14.842984    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:49:14.843000    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:49:14.877279    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:49:14.877289    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:49:14.891104    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:49:14.891118    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:49:13.894043    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:13.894124    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:17.437205    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:18.897211    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:18.897318    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:22.439810    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:22.440274    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:49:22.479255    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:49:22.479416    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:49:22.500509    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:49:22.500598    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:49:22.515834    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:49:22.515899    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:49:22.528445    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:49:22.528507    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:49:22.539869    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:49:22.539933    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:49:22.551164    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:49:22.551233    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:49:22.566577    9587 logs.go:276] 0 containers: []
	W0318 13:49:22.566589    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:49:22.566649    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:49:22.578708    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:49:22.578729    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:49:22.578735    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:49:22.603117    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:49:22.603139    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:49:22.643628    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:49:22.643640    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:49:22.662379    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:49:22.662392    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:49:22.675322    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:49:22.675334    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:49:22.692997    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:49:22.693010    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:49:22.711413    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:49:22.711425    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:49:22.726994    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:49:22.727004    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:49:22.738864    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:49:22.738878    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:49:22.776497    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:49:22.776507    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:49:22.780989    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:49:22.780997    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:49:22.799388    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:49:22.799402    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:49:22.837091    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:49:22.837101    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:49:22.848802    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:49:22.848812    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:49:22.863268    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:49:22.863277    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:49:22.875685    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:49:22.875696    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:49:25.392857    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:23.898848    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:23.898892    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:30.395127    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:30.395313    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:49:30.419343    9587 logs.go:276] 2 containers: [4110972b2abf 84cd5d05ad71]
	I0318 13:49:30.419445    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:49:30.435698    9587 logs.go:276] 2 containers: [6b481c08dcd0 00cfc4402308]
	I0318 13:49:30.435775    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:49:30.448638    9587 logs.go:276] 1 containers: [a10cdfea70cf]
	I0318 13:49:30.448706    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:49:30.460306    9587 logs.go:276] 2 containers: [129e247fa624 4f52d8f210c8]
	I0318 13:49:30.460371    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:49:30.470636    9587 logs.go:276] 1 containers: [17e18df8cbd7]
	I0318 13:49:30.470707    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:49:30.480984    9587 logs.go:276] 2 containers: [2500731e9ecf aa2b472eda1e]
	I0318 13:49:30.481047    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:49:30.491396    9587 logs.go:276] 0 containers: []
	W0318 13:49:30.491408    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:49:30.491468    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:49:30.502174    9587 logs.go:276] 1 containers: [a50dbbec77c8]
	I0318 13:49:30.502191    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:49:30.502197    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:49:30.506796    9587 logs.go:123] Gathering logs for coredns [a10cdfea70cf] ...
	I0318 13:49:30.506803    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10cdfea70cf"
	I0318 13:49:30.518703    9587 logs.go:123] Gathering logs for kube-scheduler [4f52d8f210c8] ...
	I0318 13:49:30.518715    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f52d8f210c8"
	I0318 13:49:30.533090    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:49:30.533100    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:49:30.572021    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:49:30.572030    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:49:30.607361    9587 logs.go:123] Gathering logs for kube-scheduler [129e247fa624] ...
	I0318 13:49:30.607372    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 129e247fa624"
	I0318 13:49:30.631975    9587 logs.go:123] Gathering logs for kube-proxy [17e18df8cbd7] ...
	I0318 13:49:30.631987    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17e18df8cbd7"
	I0318 13:49:30.653578    9587 logs.go:123] Gathering logs for kube-controller-manager [2500731e9ecf] ...
	I0318 13:49:30.653591    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2500731e9ecf"
	I0318 13:49:30.675102    9587 logs.go:123] Gathering logs for storage-provisioner [a50dbbec77c8] ...
	I0318 13:49:30.675113    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a50dbbec77c8"
	I0318 13:49:30.686725    9587 logs.go:123] Gathering logs for kube-apiserver [4110972b2abf] ...
	I0318 13:49:30.686741    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4110972b2abf"
	I0318 13:49:30.703955    9587 logs.go:123] Gathering logs for kube-apiserver [84cd5d05ad71] ...
	I0318 13:49:30.703966    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84cd5d05ad71"
	I0318 13:49:30.746005    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:49:30.746015    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:49:30.770193    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:49:30.770202    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:49:30.781781    9587 logs.go:123] Gathering logs for etcd [6b481c08dcd0] ...
	I0318 13:49:30.781793    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b481c08dcd0"
	I0318 13:49:30.796562    9587 logs.go:123] Gathering logs for etcd [00cfc4402308] ...
	I0318 13:49:30.796571    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00cfc4402308"
	I0318 13:49:30.811067    9587 logs.go:123] Gathering logs for kube-controller-manager [aa2b472eda1e] ...
	I0318 13:49:30.811076    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2b472eda1e"
	I0318 13:49:28.899226    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:28.899269    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:33.324170    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:33.901537    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:33.901754    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:49:33.917526    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:49:33.917590    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:49:33.930319    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:49:33.930401    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:49:33.941218    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:49:33.941286    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:49:33.954420    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:49:33.954500    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:49:33.964735    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:49:33.964801    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:49:33.975779    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:49:33.975844    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:49:33.985814    9750 logs.go:276] 0 containers: []
	W0318 13:49:33.985825    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:49:33.985893    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:49:34.000256    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:49:34.000278    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:49:34.000283    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:49:34.111913    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:49:34.111927    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:49:34.126336    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:49:34.126345    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:49:34.164971    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:49:34.164985    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:49:34.169516    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:49:34.169523    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:49:34.194419    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:49:34.194427    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:49:34.209287    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:49:34.209298    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:49:34.253713    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:49:34.253725    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:49:34.268382    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:49:34.268392    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:49:34.283560    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:49:34.283569    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:49:34.301543    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:49:34.301554    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:49:34.317456    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:49:34.317465    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:49:34.329324    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:49:34.329335    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:49:34.340878    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:49:34.340889    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:49:34.353721    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:49:34.353739    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:49:34.369455    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:49:34.369465    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:49:36.887084    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:38.326420    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:38.326561    9587 kubeadm.go:591] duration metric: took 4m4.416703375s to restartPrimaryControlPlane
	W0318 13:49:38.326696    9587 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:49:38.326741    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0318 13:49:39.401167    9587 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.074415666s)
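At 13:49:38 the retry loop above gives up: after roughly 4m4s of failed healthz probes, minikube logs "Unable to restart control-plane node(s), will reset cluster" and falls through to `kubeadm reset --force` followed by a fresh `kubeadm init`. A control-flow sketch of that wait-then-reset fallback (the function shape and the shortened demo timeout are assumptions; durations, cadence, and messages come from the log):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForAPIServer polls until probe succeeds or the timeout elapses,
// probing on the same ~5 s cadence the healthz log lines show.
func waitForAPIServer(timeout time.Duration, probe func() error) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if probe() == nil {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return errors.New("apiserver never became healthy")
}

func main() {
	// 10 s here so the demo finishes quickly; the log waited ~4m4s.
	err := waitForAPIServer(10*time.Second, func() error {
		return errors.New("context deadline exceeded") // stand-in for the healthz GET
	})
	if err != nil {
		fmt.Println("! Unable to restart control-plane node(s), will reset cluster")
		// The log then runs: kubeadm reset --force, followed by kubeadm init.
	}
}
```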
	I0318 13:49:39.401233    9587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:49:39.406371    9587 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:49:39.409276    9587 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:49:39.412097    9587 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:49:39.412102    9587 kubeadm.go:156] found existing configuration files:
	
	I0318 13:49:39.412123    9587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/admin.conf
	I0318 13:49:39.414486    9587 kubeadm.go:162] "https://control-plane.minikube.internal:51166" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:49:39.414505    9587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:49:39.417585    9587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/kubelet.conf
	I0318 13:49:39.420846    9587 kubeadm.go:162] "https://control-plane.minikube.internal:51166" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:49:39.420866    9587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:49:39.423490    9587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/controller-manager.conf
	I0318 13:49:39.425984    9587 kubeadm.go:162] "https://control-plane.minikube.internal:51166" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:49:39.426007    9587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:49:39.429123    9587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/scheduler.conf
	I0318 13:49:39.432029    9587 kubeadm.go:162] "https://control-plane.minikube.internal:51166" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51166 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:49:39.432052    9587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
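The four grep-then-`rm -f` pairs above implement a stale-kubeconfig sweep: any conf that does not mention the expected control-plane endpoint is deleted before `kubeadm init`. A Go sketch of that sweep (paths and endpoint taken from the log; the loop shape is illustrative, not minikube source):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:51166"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		// Missing file or no endpoint match -> treat as stale, mirror `rm -f`.
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			os.Remove(path)
		}
	}
}
```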
	I0318 13:49:39.434704    9587 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:49:39.450443    9587 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0318 13:49:39.450472    9587 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:49:39.506544    9587 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:49:39.506635    9587 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:49:39.506684    9587 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:49:39.555676    9587 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:49:39.562897    9587 out.go:204]   - Generating certificates and keys ...
	I0318 13:49:39.562936    9587 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:49:39.562965    9587 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:49:39.563004    9587 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:49:39.563040    9587 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:49:39.563074    9587 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:49:39.563106    9587 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:49:39.563139    9587 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:49:39.563180    9587 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:49:39.563220    9587 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:49:39.563258    9587 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:49:39.563285    9587 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:49:39.563319    9587 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:49:39.627338    9587 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:49:39.687285    9587 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:49:39.754816    9587 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:49:39.870379    9587 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:49:39.901926    9587 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:49:39.902335    9587 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:49:39.902449    9587 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:49:39.991377    9587 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:49:39.995847    9587 out.go:204]   - Booting up control plane ...
	I0318 13:49:39.995915    9587 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:49:39.995965    9587 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:49:39.996042    9587 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:49:39.996087    9587 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:49:39.996188    9587 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:49:41.887388    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:41.887524    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:49:41.899423    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:49:41.899498    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:49:41.910366    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:49:41.910443    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:49:41.921573    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:49:41.921647    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:49:41.933219    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:49:41.933290    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:49:41.944497    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:49:41.944590    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:49:41.956346    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:49:41.956413    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:49:41.967469    9750 logs.go:276] 0 containers: []
	W0318 13:49:41.967482    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:49:41.967540    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:49:41.978949    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:49:41.978967    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:49:41.978973    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:49:41.994576    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:49:41.994595    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:49:42.009897    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:49:42.009908    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:49:42.028106    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:49:42.028118    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:49:42.043154    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:49:42.043170    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:49:42.047708    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:49:42.047719    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:49:42.090956    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:49:42.090984    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:49:42.117367    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:49:42.117390    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:49:42.130026    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:49:42.130037    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:49:42.172834    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:49:42.172853    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:49:42.214090    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:49:42.214104    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:49:42.230403    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:49:42.230424    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:49:42.245125    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:49:42.245142    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:49:42.257127    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:49:42.257140    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:49:42.273633    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:49:42.273643    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:49:42.286473    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:49:42.286489    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:49:44.497549    9587 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.504757 seconds
	I0318 13:49:44.497708    9587 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:49:44.502177    9587 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:49:45.028121    9587 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:49:45.028488    9587 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-647000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:49:45.533017    9587 kubeadm.go:309] [bootstrap-token] Using token: vlu6oa.7j8asp2g3j3jbv2u
	I0318 13:49:45.538781    9587 out.go:204]   - Configuring RBAC rules ...
	I0318 13:49:45.538843    9587 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:49:45.538905    9587 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:49:45.542214    9587 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:49:45.543200    9587 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:49:45.544154    9587 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:49:45.545106    9587 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:49:45.548530    9587 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:49:45.716140    9587 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:49:45.938845    9587 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:49:45.939229    9587 kubeadm.go:309] 
	I0318 13:49:45.939262    9587 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:49:45.939266    9587 kubeadm.go:309] 
	I0318 13:49:45.939305    9587 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:49:45.939308    9587 kubeadm.go:309] 
	I0318 13:49:45.939319    9587 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:49:45.939359    9587 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:49:45.939386    9587 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:49:45.939404    9587 kubeadm.go:309] 
	I0318 13:49:45.939431    9587 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:49:45.939434    9587 kubeadm.go:309] 
	I0318 13:49:45.939457    9587 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:49:45.939461    9587 kubeadm.go:309] 
	I0318 13:49:45.939523    9587 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:49:45.939578    9587 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:49:45.939620    9587 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:49:45.939625    9587 kubeadm.go:309] 
	I0318 13:49:45.939674    9587 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:49:45.939730    9587 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:49:45.939741    9587 kubeadm.go:309] 
	I0318 13:49:45.939801    9587 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token vlu6oa.7j8asp2g3j3jbv2u \
	I0318 13:49:45.939856    9587 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f245f57130bb8b4395382cd74200f36af238eb522c12e31804ffbb421429194 \
	I0318 13:49:45.939867    9587 kubeadm.go:309] 	--control-plane 
	I0318 13:49:45.939870    9587 kubeadm.go:309] 
	I0318 13:49:45.939914    9587 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:49:45.939916    9587 kubeadm.go:309] 
	I0318 13:49:45.939983    9587 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token vlu6oa.7j8asp2g3j3jbv2u \
	I0318 13:49:45.940060    9587 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f245f57130bb8b4395382cd74200f36af238eb522c12e31804ffbb421429194 
	I0318 13:49:45.940121    9587 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:49:45.940129    9587 cni.go:84] Creating CNI manager for ""
	I0318 13:49:45.940137    9587 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:49:45.949774    9587 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:49:45.953951    9587 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:49:45.957096    9587 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
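The `scp memory --> /etc/cni/net.d/1-k8s.conflist` step above writes the bridge CNI config chosen two lines earlier ("recommending bridge"). The 457-byte payload itself is not shown in the log; the conflist below is a hypothetical minimal bridge configuration to indicate the general shape — every field value is an assumption:

```go
package main

import "os"

// Hypothetical minimal bridge conflist; every field value is an assumption.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
	// Mirrors the destination path from the log line above.
	_ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
}
```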
	I0318 13:49:45.961925    9587 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:49:45.961966    9587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:49:45.962028    9587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-647000 minikube.k8s.io/updated_at=2024_03_18T13_49_45_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=running-upgrade-647000 minikube.k8s.io/primary=true
	I0318 13:49:46.000905    9587 kubeadm.go:1107] duration metric: took 38.971791ms to wait for elevateKubeSystemPrivileges
	I0318 13:49:46.000910    9587 ops.go:34] apiserver oom_adj: -16
	W0318 13:49:46.001005    9587 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:49:46.001010    9587 kubeadm.go:393] duration metric: took 4m12.105172333s to StartCluster
	I0318 13:49:46.001024    9587 settings.go:142] acquiring lock: {Name:mkb16a292265123b9734bd031ef06799b38c3f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:46.001177    9587 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:49:46.001579    9587 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/kubeconfig: {Name:mk6a62990bf9328d54440f15380010f8199a9228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:46.001793    9587 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:49:46.005773    9587 out.go:177] * Verifying Kubernetes components...
	I0318 13:49:46.001815    9587 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:49:46.001961    9587 config.go:182] Loaded profile config "running-upgrade-647000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 13:49:46.013812    9587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:46.013827    9587 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-647000"
	I0318 13:49:46.013851    9587 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-647000"
	I0318 13:49:46.013837    9587 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-647000"
	I0318 13:49:46.013866    9587 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-647000"
	W0318 13:49:46.013871    9587 addons.go:243] addon storage-provisioner should already be in state true
	I0318 13:49:46.013880    9587 host.go:66] Checking if "running-upgrade-647000" exists ...
	I0318 13:49:46.015149    9587 kapi.go:59] client config for running-upgrade-647000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/running-upgrade-647000/client.key", CAFile:"/Users/jenkins/minikube-integration/18421-6777/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105bcea80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 13:49:46.015741    9587 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-647000"
	W0318 13:49:46.015746    9587 addons.go:243] addon default-storageclass should already be in state true
	I0318 13:49:46.015753    9587 host.go:66] Checking if "running-upgrade-647000" exists ...
	I0318 13:49:46.019869    9587 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:46.022855    9587 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:49:46.022860    9587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:49:46.022867    9587 sshutil.go:53] new ssh client: &{IP:localhost Port:51134 SSHKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/running-upgrade-647000/id_rsa Username:docker}
	I0318 13:49:46.023731    9587 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:49:46.023736    9587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:49:46.023740    9587 sshutil.go:53] new ssh client: &{IP:localhost Port:51134 SSHKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/running-upgrade-647000/id_rsa Username:docker}
	I0318 13:49:46.100840    9587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:49:46.105918    9587 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:49:46.105955    9587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:46.110444    9587 api_server.go:72] duration metric: took 108.638333ms to wait for apiserver process to appear ...
	I0318 13:49:46.110451    9587 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:49:46.110459    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:46.120745    9587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:49:46.122396    9587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:49:44.802088    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:51.112656    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
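
[Editor's note] The api_server.go lines above show the health-wait pattern that repeats for the rest of this log: issue an HTTPS GET against /healthz, give up after roughly five seconds with a Client.Timeout error, and retry. A minimal Go sketch of that pattern, for orientation only; this is not minikube's source, and the URL, attempt count, and InsecureSkipVerify shortcut are assumptions for illustration:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until the apiserver answers 200 OK or the
// attempts are exhausted. A self-signed serving cert is assumed, hence
// InsecureSkipVerify; real code should verify against the cluster CA.
func waitForHealthz(url string, attempts int) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gaps between checks in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
		}
		// On timeout, client.Get returns the "context deadline exceeded
		// (Client.Timeout exceeded while awaiting headers)" error seen above.
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 10); err != nil {
		fmt.Println(err)
	}
}

In the log, two concurrent test processes (PIDs 9587 and 9750) run this loop against the same VM, which is why their check/timeout lines interleave.
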
	I0318 13:49:51.112734    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:49.804084    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:49.804358    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:49:49.835272    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:49:49.835391    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:49:49.851369    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:49:49.851458    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:49:49.864555    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:49:49.864630    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:49:49.875598    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:49:49.875668    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:49:49.885795    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:49:49.885860    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:49:49.896285    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:49:49.896369    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:49:49.906331    9750 logs.go:276] 0 containers: []
	W0318 13:49:49.906342    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:49:49.906403    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:49:49.916753    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:49:49.916772    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:49:49.916777    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:49:49.930452    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:49:49.930462    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:49:49.968521    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:49:49.968532    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:49:50.004829    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:49:50.004839    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:49:50.016445    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:49:50.016456    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:49:50.028324    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:49:50.028335    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:49:50.054094    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:49:50.054107    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:49:50.066269    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:49:50.066279    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:49:50.070996    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:49:50.071006    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:49:50.088803    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:49:50.088813    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:49:50.100467    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:49:50.100477    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:49:50.115973    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:49:50.115983    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:49:50.133205    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:49:50.133217    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:49:50.171234    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:49:50.171254    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:49:50.185639    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:49:50.185649    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:49:50.200354    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:49:50.200366    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
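
[Editor's note] Each time a healthz check times out, the logs.go lines above run one fixed recipe per control-plane component: resolve container IDs with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tail each container's last 400 log lines. A self-contained Go sketch of that recipe, with hypothetical helper names and a shortened component list; it mirrors the commands in the log but is not minikube's implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists the IDs of all containers (running or exited) whose
// name matches the k8s_<name> prefix, one ID per output line.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, comp := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(comp)
		if err != nil {
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines, as in the "docker logs --tail 400" runs above.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", comp, id, logs)
		}
	}
}

Components that report two container IDs (e.g. kube-apiserver above) are ones whose pod has been restarted, so both the live and the exited container's logs get gathered.
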
	I0318 13:49:52.714196    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:56.113239    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:56.113264    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:57.716780    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:57.716894    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:49:57.727357    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:49:57.727430    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:49:57.738075    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:49:57.738150    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:49:57.749312    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:49:57.749379    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:49:57.777507    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:49:57.777586    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:49:57.788160    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:49:57.788234    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:49:57.798798    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:49:57.798867    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:49:57.816971    9750 logs.go:276] 0 containers: []
	W0318 13:49:57.816981    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:49:57.817037    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:49:57.829185    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:49:57.829204    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:49:57.829209    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:49:57.871737    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:49:57.871757    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:49:57.875937    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:49:57.875943    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:49:57.911110    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:49:57.911121    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:49:57.923017    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:49:57.923028    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:49:57.934975    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:49:57.934987    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:49:57.947397    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:49:57.947409    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:49:57.962141    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:49:57.962152    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:49:57.976407    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:49:57.976417    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:49:57.988789    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:49:57.988801    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:49:58.003610    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:49:58.003623    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:49:58.043729    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:49:58.043745    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:49:58.062732    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:49:58.062742    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:49:58.078882    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:49:58.078892    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:49:58.091186    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:49:58.091196    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:49:58.108671    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:49:58.108686    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:50:01.113658    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:01.113734    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:00.632323    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:06.114270    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:06.114288    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:05.634665    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:05.634928    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:50:05.662499    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:50:05.662608    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:50:05.680648    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:50:05.680729    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:50:05.693549    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:50:05.693637    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:50:05.705825    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:50:05.705895    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:50:05.716704    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:50:05.716774    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:50:05.727166    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:50:05.727237    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:50:05.742883    9750 logs.go:276] 0 containers: []
	W0318 13:50:05.742895    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:05.742958    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:50:05.755177    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:50:05.755194    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:50:05.755200    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:50:05.767163    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:50:05.767173    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:50:05.778616    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:05.778626    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:50:05.816788    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:50:05.816802    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:50:05.838101    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:50:05.838111    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:50:05.853453    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:50:05.853465    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:50:05.866243    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:50:05.866254    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:50:05.905315    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:05.905326    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:05.909454    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:50:05.909461    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:50:05.923452    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:50:05.923463    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:50:05.943090    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:50:05.943099    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:50:05.957977    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:05.957988    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:05.996316    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:50:05.996325    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:50:06.011404    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:50:06.011415    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:50:06.022452    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:50:06.022461    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:50:06.045997    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:50:06.046006    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:50:11.114905    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:11.114936    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:08.559193    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:16.115810    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:16.115838    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0318 13:50:16.460270    9587 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0318 13:50:16.466390    9587 out.go:177] * Enabled addons: storage-provisioner
	I0318 13:50:16.477311    9587 addons.go:505] duration metric: took 30.47566075s for enable addons: enabled=[storage-provisioner]
	I0318 13:50:13.561191    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:13.561336    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:50:13.573673    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:50:13.573733    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:50:13.584404    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:50:13.584469    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:50:13.594504    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:50:13.594558    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:50:13.609387    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:50:13.609454    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:50:13.620174    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:50:13.620242    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:50:13.630241    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:50:13.630306    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:50:13.640931    9750 logs.go:276] 0 containers: []
	W0318 13:50:13.640943    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:13.641001    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:50:13.651565    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:50:13.651581    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:50:13.651587    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:50:13.668160    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:13.668169    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:13.705632    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:13.705645    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:13.710263    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:50:13.710274    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:50:13.724387    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:50:13.724400    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:50:13.737852    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:50:13.737863    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:50:13.749836    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:50:13.749846    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:50:13.761444    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:50:13.761460    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:50:13.781613    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:50:13.781627    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:50:13.793894    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:50:13.793907    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:50:13.809896    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:50:13.809907    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:50:13.834349    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:13.834356    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:50:13.870820    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:50:13.870832    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:50:13.885271    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:50:13.885281    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:50:13.923583    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:50:13.923594    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:50:13.935233    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:50:13.935244    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:50:16.449581    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:21.116899    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:21.116922    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:21.451902    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:21.452068    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:50:21.466889    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:50:21.466969    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:50:21.479091    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:50:21.479159    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:50:21.491378    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:50:21.491441    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:50:21.501859    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:50:21.501928    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:50:21.512634    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:50:21.512702    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:50:21.523090    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:50:21.523159    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:50:21.532854    9750 logs.go:276] 0 containers: []
	W0318 13:50:21.532865    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:21.532920    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:50:21.543870    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:50:21.543889    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:50:21.543894    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:50:21.558202    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:50:21.558215    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:50:21.570083    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:50:21.570095    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:50:21.584334    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:21.584345    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:21.588959    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:21.588968    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:50:21.628012    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:50:21.628024    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:50:21.640167    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:50:21.640181    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:50:21.651381    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:50:21.651391    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:50:21.663243    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:50:21.663254    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:50:21.701880    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:50:21.701891    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:50:21.716142    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:50:21.716152    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:50:21.731223    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:50:21.731235    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:50:21.755390    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:50:21.755399    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:50:21.766748    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:21.766760    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:21.804586    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:50:21.804599    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:50:21.820119    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:50:21.820133    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:50:26.117554    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:26.117603    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:24.342803    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:31.119229    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:31.119278    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:29.345131    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:29.345316    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:50:29.358688    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:50:29.358775    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:50:29.369710    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:50:29.369775    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:50:29.380216    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:50:29.380285    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:50:29.391104    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:50:29.391172    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:50:29.401073    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:50:29.401145    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:50:29.411550    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:50:29.411610    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:50:29.421799    9750 logs.go:276] 0 containers: []
	W0318 13:50:29.421817    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:29.421874    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:50:29.432088    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:50:29.432107    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:50:29.432112    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:50:29.443358    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:50:29.443368    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:50:29.467601    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:29.467613    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:50:29.504820    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:50:29.504831    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:50:29.521350    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:50:29.521362    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:50:29.535755    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:50:29.535767    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:50:29.546788    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:50:29.546801    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:50:29.562298    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:50:29.562311    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:50:29.580470    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:50:29.580481    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:50:29.594955    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:29.594967    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:29.599179    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:50:29.599189    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:50:29.617412    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:50:29.617424    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:50:29.659133    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:50:29.659144    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:50:29.670775    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:50:29.670786    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:50:29.682412    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:29.682422    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:29.720620    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:50:29.720629    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:50:32.233804    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:36.121340    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:36.121379    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:37.236175    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:37.236430    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:50:37.272550    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:50:37.272651    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:50:37.287877    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:50:37.287961    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:50:37.300882    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:50:37.300950    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:50:37.311610    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:50:37.311682    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:50:37.322354    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:50:37.322422    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:50:37.333036    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:50:37.333104    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:50:37.351702    9750 logs.go:276] 0 containers: []
	W0318 13:50:37.351716    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:37.351773    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:50:37.362396    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:50:37.362412    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:37.362420    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:37.399111    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:37.399123    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:37.403155    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:37.403163    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:50:37.439108    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:50:37.439121    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:50:37.456905    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:50:37.456918    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:50:37.468859    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:50:37.468873    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:50:37.484177    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:50:37.484187    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:50:37.495905    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:50:37.495917    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:50:37.509357    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:50:37.509367    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:50:37.523946    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:50:37.523958    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:50:37.537848    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:50:37.537860    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:50:37.552959    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:50:37.552971    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:50:37.564562    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:50:37.564572    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:50:37.602359    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:50:37.602370    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:50:37.614080    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:50:37.614091    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:50:37.626429    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:50:37.626440    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:50:41.123450    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:41.123494    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:40.151324    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:46.124813    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:46.124907    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:50:46.136879    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:50:46.136949    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:50:46.148041    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:50:46.148111    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:50:46.158586    9587 logs.go:276] 2 containers: [16c60d7d510f 61927732b548]
	I0318 13:50:46.158661    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:50:46.169680    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:50:46.169745    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:50:46.180319    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:50:46.180390    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:50:46.191029    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:50:46.191094    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:50:46.201019    9587 logs.go:276] 0 containers: []
	W0318 13:50:46.201030    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:46.201087    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:50:46.211373    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:50:46.211387    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:50:46.211393    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:50:46.223480    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:46.223495    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:46.258010    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:50:46.258021    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:50:46.272365    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:50:46.272377    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:50:46.283747    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:50:46.283758    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:50:46.295096    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:50:46.295107    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:50:46.306583    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:50:46.306595    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:50:46.324255    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:50:46.324265    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:50:46.348175    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:46.348183    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:46.352529    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:46.352539    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:50:46.396503    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:50:46.396515    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:50:46.411127    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:50:46.411139    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:50:46.423031    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:50:46.423042    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:50:45.153646    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:45.153806    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:50:45.173748    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:50:45.173841    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:50:45.187257    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:50:45.187335    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:50:45.198731    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:50:45.198804    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:50:45.209908    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:50:45.209984    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:50:45.220274    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:50:45.220342    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:50:45.232567    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:50:45.232637    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:50:45.243069    9750 logs.go:276] 0 containers: []
	W0318 13:50:45.243080    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:45.243136    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:50:45.254275    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:50:45.254293    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:50:45.254298    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:50:45.294935    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:50:45.294945    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:50:45.308948    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:50:45.308958    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:50:45.321082    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:50:45.321092    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:50:45.332832    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:45.332843    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:45.370946    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:50:45.370955    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:50:45.385498    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:50:45.385510    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:50:45.401365    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:50:45.401374    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:50:45.419004    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:50:45.419014    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:50:45.443288    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:50:45.443295    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:50:45.457288    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:45.457300    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:50:45.495375    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:50:45.495386    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:50:45.507604    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:45.507614    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:45.511772    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:50:45.511779    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:50:45.523209    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:50:45.523219    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:50:45.538078    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:50:45.538088    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:50:48.051347    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:48.939322    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:53.053649    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
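The pair of "Checking apiserver healthz" / "stopped" lines above is the probe that drives this whole retry loop: api_server.go GETs /healthz on the guest and, when no response arrives before the client-side deadline, logs the request as "stopped" and falls back to collecting diagnostics. A minimal Go sketch of such a probe follows; this is hypothetical illustration, not minikube's actual implementation, and the 5-second timeout and InsecureSkipVerify are assumptions inferred from the ~5 s gap between check and "stopped" in the log and from the self-signed apiserver certificate inside the VM.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// checkHealthz GETs the apiserver /healthz endpoint and reports a
	// client-side timeout the same way the log above does ("stopped: ...").
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // assumed; the log shows ~5s between check and "stopped"
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumed: self-signed cert in the VM
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return fmt.Errorf("stopped: %s: %w", url, err)
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d", resp.StatusCode)
		}
		return nil
	}

	func main() {
		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println(err)
		}
	}
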
	I0318 13:50:53.054073    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:50:53.094379    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:50:53.094520    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:50:53.116689    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:50:53.116783    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:50:53.132728    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:50:53.132809    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:50:53.145642    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:50:53.145714    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:50:53.156896    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:50:53.156971    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:50:53.171067    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:50:53.171144    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:50:53.182373    9750 logs.go:276] 0 containers: []
	W0318 13:50:53.182385    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:53.182446    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:50:53.193310    9750 logs.go:276] 1 containers: [657d2055fda5]
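Each diagnostic cycle begins by enumerating control-plane containers one component at a time, as in the block above: `docker ps -a` filtered on the k8s_<component> name prefix, with a Go template that prints only the ID; logs.go then counts the matches and warns when a component such as kindnet has none. A sketch of that lookup, with a hypothetical helper name and assuming the docker CLI is on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists every container (running or exited) whose name
	// matches k8s_<component> and returns the bare IDs, mirroring
	// `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println(err)
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids) // e.g. "0 containers: []" for kindnet
		}
	}
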
	I0318 13:50:53.193329    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:50:53.193335    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:50:53.207781    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:50:53.207791    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:50:53.219653    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:50:53.219665    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:50:53.235917    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:50:53.235930    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:50:53.272948    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:50:53.272958    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:50:53.940523    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:53.940725    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:50:53.965096    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:50:53.965209    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:50:53.981326    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:50:53.981405    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:50:53.994768    9587 logs.go:276] 2 containers: [16c60d7d510f 61927732b548]
	I0318 13:50:53.994834    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:50:54.006171    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:50:54.006237    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:50:54.016552    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:50:54.016616    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:50:54.027294    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:50:54.027359    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:50:54.037138    9587 logs.go:276] 0 containers: []
	W0318 13:50:54.037149    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:54.037199    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:50:54.047917    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:50:54.047933    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:50:54.047938    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:50:54.065328    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:50:54.065339    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:50:54.090364    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:50:54.090380    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:50:54.101683    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:54.101694    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:54.136450    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:50:54.136461    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:50:54.150776    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:50:54.150789    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:50:54.165528    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:50:54.165539    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:50:54.178731    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:50:54.178741    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:50:54.190752    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:50:54.190765    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:50:54.209160    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:50:54.209171    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:50:54.220732    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:50:54.220745    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:50:54.233553    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:54.233564    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:54.237925    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:54.237932    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:50:53.284030    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:50:53.284040    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:50:53.295548    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:50:53.295559    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:50:53.313575    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:50:53.313585    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:50:53.329474    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:50:53.329484    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:50:53.352712    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:53.352720    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:53.388668    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:50:53.388679    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:50:53.404591    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:53.404601    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:53.408704    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:50:53.408710    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:50:53.425484    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:50:53.425494    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:50:53.444655    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:50:53.444664    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:50:53.456847    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:53.456856    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
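Once the IDs are known, the gathering pass above tails the last 400 lines of every source through `/bin/bash -c`: `docker logs --tail 400 <id>` per container, `journalctl -n 400` for the kubelet and docker/cri-docker units, plus dmesg, `kubectl describe nodes`, and container status. The container-status command uses the shell fallback `` sudo `which crictl || echo crictl` ps -a || sudo docker ps -a `` so it degrades to plain docker when crictl is absent. A sketch of this step, with a hypothetical driver over IDs taken from the enumeration lines above; minikube's real ssh_runner executes these over SSH rather than locally:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one log-collection command through a shell, as the
	// ssh_runner lines above do, and returns its combined output.
	func gather(cmd string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return string(out), err
	}

	func main() {
		// Hypothetical driver over IDs found by the enumeration step.
		for _, id := range []string{"5e53b215e10c", "9e22a05ae9a3"} {
			if logs, err := gather("docker logs --tail 400 " + id); err == nil {
				fmt.Print(logs)
			}
		}
		// Unit logs are tailed the same way, e.g.:
		//   sudo journalctl -u kubelet -n 400
		//   sudo journalctl -u docker -u cri-docker -n 400
	}
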
	I0318 13:50:55.994559    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:56.784008    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:00.996868    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:00.996995    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:01.010790    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:51:01.010862    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:01.024097    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:51:01.024177    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:01.040629    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:51:01.040692    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:01.052445    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:51:01.052506    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:01.062362    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:51:01.062429    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:01.077649    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:51:01.077724    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:01.090014    9750 logs.go:276] 0 containers: []
	W0318 13:51:01.090027    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:01.090079    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:01.100685    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:51:01.100701    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:51:01.100706    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:51:01.115034    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:51:01.115045    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:51:01.132728    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:51:01.132739    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:01.144597    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:01.144609    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:01.182876    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:01.182888    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:01.187409    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:51:01.187415    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:51:01.228077    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:51:01.228091    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:51:01.243118    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:01.243129    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:01.267245    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:51:01.267255    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:51:01.280996    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:51:01.281005    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:51:01.296364    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:51:01.296375    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:51:01.309838    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:01.309850    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:01.345316    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:51:01.345326    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:51:01.361706    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:51:01.361723    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:51:01.373849    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:51:01.373862    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:51:01.385274    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:51:01.385283    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:51:01.786248    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:01.786450    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:01.808316    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:51:01.808412    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:01.821445    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:51:01.821520    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:01.833354    9587 logs.go:276] 2 containers: [16c60d7d510f 61927732b548]
	I0318 13:51:01.833423    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:01.843388    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:51:01.843456    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:01.853492    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:51:01.853563    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:01.864400    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:51:01.864468    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:01.874287    9587 logs.go:276] 0 containers: []
	W0318 13:51:01.874300    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:01.874365    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:01.885037    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:51:01.885053    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:51:01.885059    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:51:01.898504    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:51:01.898514    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:51:01.910597    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:51:01.910610    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:51:01.925433    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:51:01.925445    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:51:01.939793    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:01.939803    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:01.973903    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:01.973912    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:01.978624    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:01.978632    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:02.014681    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:51:02.014691    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:51:02.029482    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:51:02.029493    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:02.041109    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:51:02.041120    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:51:02.053096    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:51:02.053106    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:51:02.070490    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:51:02.070499    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:51:02.081604    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:02.081616    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:04.605165    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:03.899622    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:09.607602    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:09.607757    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:09.625094    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:51:09.625167    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:09.637216    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:51:09.637290    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:09.647861    9587 logs.go:276] 2 containers: [16c60d7d510f 61927732b548]
	I0318 13:51:09.647924    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:09.657928    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:51:09.657985    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:09.668084    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:51:09.668155    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:09.678238    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:51:09.678298    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:09.691888    9587 logs.go:276] 0 containers: []
	W0318 13:51:09.691901    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:09.691948    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:09.701988    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:51:09.702001    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:09.702006    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:09.735336    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:09.735345    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:09.769474    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:51:09.769485    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:51:09.784148    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:51:09.784159    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:51:09.798820    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:51:09.798834    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:51:09.810156    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:51:09.810169    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:51:09.824919    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:51:09.824929    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:51:09.836650    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:51:09.836663    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:51:09.856643    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:51:09.856653    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:09.868096    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:09.868109    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:09.873164    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:51:09.873173    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:51:09.888091    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:51:09.888101    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:51:09.899066    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:09.899076    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:08.900760    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:08.901015    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:08.926122    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:51:08.926243    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:08.947264    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:51:08.947341    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:08.959585    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:51:08.959657    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:08.970428    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:51:08.970502    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:08.980473    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:51:08.980536    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:08.990826    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:51:08.990889    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:09.001224    9750 logs.go:276] 0 containers: []
	W0318 13:51:09.001238    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:09.001291    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:09.011445    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:51:09.011465    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:51:09.011471    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:51:09.023313    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:51:09.023326    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:51:09.038812    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:51:09.038824    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:09.050709    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:09.050719    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:09.085428    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:51:09.085438    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:51:09.099538    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:51:09.099548    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:51:09.113847    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:51:09.113860    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:51:09.131111    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:09.131122    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:09.155289    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:09.155300    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:09.159690    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:51:09.159699    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:51:09.198993    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:51:09.199004    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:51:09.214543    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:09.214556    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:09.254107    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:51:09.254127    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:51:09.270407    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:51:09.270417    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:51:09.282602    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:51:09.282613    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:51:09.294201    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:51:09.294213    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:51:11.810955    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:12.423378    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:16.813304    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:16.813548    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:16.833411    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:51:16.833494    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:16.847612    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:51:16.847691    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:16.859512    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:51:16.859581    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:16.870275    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:51:16.870342    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:16.880540    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:51:16.880607    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:16.890862    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:51:16.890926    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:16.900626    9750 logs.go:276] 0 containers: []
	W0318 13:51:16.900637    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:16.900690    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:16.910872    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:51:16.910888    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:51:16.910893    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:51:16.922697    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:16.922711    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:16.947029    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:16.947036    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:16.951239    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:51:16.951247    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:51:16.989108    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:51:16.989119    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:51:17.000520    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:51:17.000532    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:51:17.015097    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:51:17.015107    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:51:17.026783    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:17.026794    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:17.060796    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:51:17.060808    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:51:17.072016    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:51:17.072028    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:51:17.097026    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:51:17.097038    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:51:17.114468    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:17.114483    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:17.150946    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:51:17.150954    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:51:17.166296    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:51:17.166306    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:51:17.184469    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:51:17.184482    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:17.196324    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:51:17.196335    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:51:17.423971    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:17.424090    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:17.436582    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:51:17.436681    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:17.446881    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:51:17.446938    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:17.457173    9587 logs.go:276] 2 containers: [16c60d7d510f 61927732b548]
	I0318 13:51:17.457232    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:17.467601    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:51:17.467670    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:17.478488    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:51:17.478560    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:17.489005    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:51:17.489066    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:17.499042    9587 logs.go:276] 0 containers: []
	W0318 13:51:17.499053    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:17.499107    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:17.509145    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:51:17.509160    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:51:17.509165    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:51:17.523889    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:51:17.523899    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:51:17.535235    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:51:17.535245    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:51:17.549616    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:51:17.549626    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:51:17.561019    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:17.561030    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:17.584864    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:17.584873    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:17.618121    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:17.618135    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:17.622415    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:51:17.622423    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:51:17.636933    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:51:17.636943    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:17.648335    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:51:17.648345    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:51:17.665647    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:17.665660    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:17.700445    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:51:17.700457    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:51:17.715404    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:51:17.715418    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:51:20.228985    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:19.711720    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:25.231178    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:25.231284    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:25.242775    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:51:25.242846    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:25.252983    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:51:25.253053    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:25.263262    9587 logs.go:276] 2 containers: [16c60d7d510f 61927732b548]
	I0318 13:51:25.263330    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:25.273489    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:51:25.273548    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:25.284243    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:51:25.284321    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:25.294244    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:51:25.294313    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:25.304123    9587 logs.go:276] 0 containers: []
	W0318 13:51:25.304134    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:25.304190    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:25.315928    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:51:25.315943    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:25.315949    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:25.320951    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:51:25.320958    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:51:25.337553    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:51:25.337564    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:51:25.355155    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:51:25.355169    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:51:25.367384    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:51:25.367394    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:25.378818    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:51:25.378829    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:51:25.393716    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:51:25.393725    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:51:25.405328    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:25.405338    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:25.430069    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:25.430077    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:25.463750    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:25.463759    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:25.505844    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:51:25.505856    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:51:25.519967    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:51:25.519977    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:51:25.541856    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:51:25.541865    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:51:24.714026    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:24.714253    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:24.730378    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:51:24.730470    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:24.743512    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:51:24.743586    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:24.754198    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:51:24.754270    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:24.765067    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:51:24.765125    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:24.775502    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:51:24.775559    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:24.786658    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:51:24.786724    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:24.796959    9750 logs.go:276] 0 containers: []
	W0318 13:51:24.796971    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:24.797026    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:24.807086    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:51:24.807110    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:24.807116    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:24.845411    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:51:24.845420    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:51:24.857570    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:51:24.857580    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:51:24.872438    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:51:24.872448    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:51:24.883696    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:51:24.883706    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:51:24.898834    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:24.898844    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:24.934754    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:51:24.934766    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:51:24.949144    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:51:24.949157    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:51:24.960425    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:51:24.960438    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:51:24.977334    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:24.977345    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:24.981913    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:51:24.981919    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:51:24.996101    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:51:24.996111    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:51:25.007343    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:25.007353    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:25.031070    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:51:25.031077    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:51:25.044912    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:51:25.044925    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:51:25.084411    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:51:25.084422    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:27.598117    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:28.058500    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:32.600777    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:32.600891    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:32.616018    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:51:32.616090    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:32.626265    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:51:32.626337    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:32.636594    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:51:32.636663    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:32.647138    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:51:32.647207    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:32.660627    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:51:32.660698    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:32.670997    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:51:32.671068    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:32.681304    9750 logs.go:276] 0 containers: []
	W0318 13:51:32.681315    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:32.681371    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:32.697415    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:51:32.697435    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:51:32.697441    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:51:32.711333    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:51:32.711345    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:51:32.726792    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:51:32.726805    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:32.738794    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:32.738806    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:32.773219    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:51:32.773231    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:51:32.785452    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:51:32.785464    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:51:32.802997    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:51:32.803008    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:51:32.814526    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:51:32.814536    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:51:32.853289    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:51:32.853300    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:51:32.869168    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:32.869178    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:32.906987    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:32.906997    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:32.911164    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:51:32.911170    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:51:32.924844    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:51:32.924854    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:51:32.936772    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:51:32.936782    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:51:32.952300    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:51:32.952312    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:51:32.967088    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:32.967097    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:33.060743    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:33.060865    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:33.071862    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:51:33.071928    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:33.082051    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:51:33.082113    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:33.092444    9587 logs.go:276] 2 containers: [16c60d7d510f 61927732b548]
	I0318 13:51:33.092508    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:33.107384    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:51:33.107452    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:33.118455    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:51:33.118518    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:33.128757    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:51:33.128875    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:33.139271    9587 logs.go:276] 0 containers: []
	W0318 13:51:33.139280    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:33.139331    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:33.149521    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:51:33.149533    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:51:33.149538    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:51:33.161210    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:33.161220    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:33.186755    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:51:33.186766    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:33.199365    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:33.199377    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:33.234242    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:51:33.234257    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:51:33.251048    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:51:33.251059    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:51:33.265735    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:51:33.265746    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:51:33.277331    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:51:33.277341    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:51:33.294814    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:33.294823    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:33.299538    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:33.299548    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:33.334148    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:51:33.334157    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:51:33.348393    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:51:33.348403    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:51:33.360161    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:51:33.360173    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
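Each gathering pass starts with container discovery: one docker ps -a query per control-plane component, filtered on the k8s_<component> name prefix and formatted down to bare IDs. Because -a includes exited containers, a component that has restarted reports two IDs (as kube-apiserver, etcd, kube-scheduler, and kube-controller-manager do for PID 9750), and a component that was never scheduled reports none, hence the repeated `No container was found matching "kindnet"` warning. A sketch of this discovery step, with listContainers as a hypothetical helper rather than minikube's API; the docker flags are copied from the log:

// Sketch of the per-component discovery behind the "docker ps -a
// --filter=name=k8s_..." lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One ID per line; -a means exited containers are listed too.
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c) // cf. the kindnet warning
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}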
	I0318 13:51:35.874193    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:35.492626    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:40.876576    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:40.876709    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:40.887944    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:51:40.888014    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:40.902673    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:51:40.902741    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:40.912887    9587 logs.go:276] 2 containers: [16c60d7d510f 61927732b548]
	I0318 13:51:40.912949    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:40.923067    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:51:40.923136    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:40.939650    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:51:40.939722    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:40.950053    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:51:40.950118    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:40.967247    9587 logs.go:276] 0 containers: []
	W0318 13:51:40.967262    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:40.967322    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:40.978857    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:51:40.978875    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:51:40.978881    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:51:40.990750    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:51:40.990761    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:51:41.008351    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:51:41.008362    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:51:41.019809    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:41.019821    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:41.045304    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:41.045315    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:41.080297    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:51:41.080307    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:51:41.094706    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:51:41.094715    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:51:41.106443    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:51:41.106452    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:51:41.120777    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:51:41.120785    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:41.132145    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:41.132156    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:41.166829    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:41.166848    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:41.171411    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:51:41.171419    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:51:41.194822    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:51:41.194838    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
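For every ID discovered, the tool then tails the last 400 lines of that container's log, and it rounds the sweep out with host-level sources: the kubelet and docker/cri-docker units via journalctl, kernel messages via dmesg (-H human-readable, -P no pager, -L=never no colour, --level restricted to warnings and worse), a kubectl describe nodes, and a container-status listing that prefers crictl but falls back to docker. Every command runs remotely through /bin/bash -c. A sketch of the fan-out; gather is a hypothetical wrapper, and the command strings are copied verbatim from the log:

// Sketch of the fan-out behind the "Gathering logs for ..." lines.
package main

import (
	"fmt"
	"os/exec"
)

func gather(source, cmd string) {
	fmt.Printf("Gathering logs for %s ...\n", source)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("%s: %v\n", source, err)
	}
	_ = out // the real tool keeps this per source for the report
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("etcd [9cf9ff66a899]", "docker logs --tail 400 9cf9ff66a899")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	gather("describe nodes",
		"sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	// `which crictl || echo crictl` substitutes crictl's path when it is
	// installed, or the bare word "crictl" otherwise; in the latter case the
	// sudo invocation fails and the "|| sudo docker ps -a" fallback runs.
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}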
	I0318 13:51:40.494873    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:40.495010    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:40.506825    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:51:40.506902    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:40.518193    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:51:40.518260    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:40.529117    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:51:40.529189    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:40.539593    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:51:40.539667    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:40.550190    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:51:40.550260    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:40.560885    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:51:40.560957    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:40.572333    9750 logs.go:276] 0 containers: []
	W0318 13:51:40.572343    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:40.572402    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:40.582832    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:51:40.582852    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:40.582858    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:40.621432    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:51:40.621441    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:51:40.635934    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:51:40.635944    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:51:40.646737    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:51:40.646749    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:51:40.664985    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:40.664996    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:40.687261    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:51:40.687269    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:40.699533    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:51:40.699544    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:51:40.713837    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:51:40.713852    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:51:40.725668    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:51:40.725678    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:51:40.742938    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:51:40.742948    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:51:40.754866    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:51:40.754877    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:51:40.768869    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:51:40.768879    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:51:40.780285    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:40.780298    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:40.784279    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:40.784285    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:40.817319    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:51:40.817331    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:51:40.854749    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:51:40.854759    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
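One detail of the sweep worth noting: "describe nodes" is the only gathered source that goes through the Kubernetes API rather than Docker or journald. It shells out to the version-matched kubectl staged inside the VM at /var/lib/minikube/binaries/v1.24.1/kubectl and points it at /var/lib/minikube/kubeconfig, presumably so the dump reflects the cluster's own binary and credentials even when the host-side kubectl or context is unusable.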
	I0318 13:51:43.710190    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:43.371837    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:48.710883    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:48.710964    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:48.727279    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:51:48.727349    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:48.746761    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:51:48.746831    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:48.759384    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:51:48.759460    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:48.772300    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:51:48.772375    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:48.783638    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:51:48.783715    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:48.795201    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:51:48.795277    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:48.806441    9587 logs.go:276] 0 containers: []
	W0318 13:51:48.806453    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:48.806505    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:48.817085    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:51:48.817100    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:51:48.817107    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:51:48.828826    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:51:48.828838    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:48.840425    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:48.840437    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:48.876982    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:51:48.876996    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:51:48.890732    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:51:48.890743    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:51:48.908667    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:51:48.908681    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:51:48.920211    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:51:48.920225    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:51:48.931334    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:51:48.931345    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:51:48.942456    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:48.942466    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:48.965683    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:48.965690    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:48.969760    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:51:48.969765    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:51:48.984939    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:51:48.984951    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:51:48.996501    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:51:48.996512    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:51:49.008372    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:49.008382    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:49.042924    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:51:49.042935    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
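By 13:51:48 the coredns list for PID 9587 has grown from two IDs to four (a6cc97ce5c62 and 81b058de957e join 16c60d7d510f and 61927732b548), and all four are tailed on every later pass. Since docker ps -a also reports exited containers, this growth is consistent with the coredns pods having been restarted inside the window, while the ID sets for the other components stay fixed and the healthz probe keeps timing out.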
	I0318 13:51:51.559369    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:48.374131    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:48.374268    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:48.387344    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:51:48.387412    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:48.401600    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:51:48.401673    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:48.412231    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:51:48.412298    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:48.422470    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:51:48.422536    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:48.436808    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:51:48.436866    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:48.447610    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:51:48.447688    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:48.459108    9750 logs.go:276] 0 containers: []
	W0318 13:51:48.459119    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:48.459179    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:48.485112    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:51:48.485133    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:48.485138    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:48.527278    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:48.527298    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:48.562762    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:51:48.562774    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:51:48.581394    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:51:48.581405    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:51:48.596124    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:51:48.596134    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:51:48.614665    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:51:48.614678    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:51:48.629110    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:51:48.629121    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:51:48.640001    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:51:48.640011    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:51:48.651424    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:48.651434    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:48.655484    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:51:48.655489    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:51:48.693169    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:51:48.693182    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:51:48.709012    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:51:48.709022    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:51:48.721690    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:51:48.721703    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:51:48.747606    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:51:48.747617    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:51:48.760297    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:48.760305    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:48.783818    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:51:48.783828    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:51.298396    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:56.561735    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:56.561825    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:56.573522    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:51:56.573594    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:56.584593    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:51:56.584663    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:56.300725    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:56.300901    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:56.317115    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:51:56.317197    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:56.332718    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:51:56.332787    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:56.342950    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:51:56.343021    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:56.353644    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:51:56.353714    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:56.364495    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:51:56.364580    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:56.375161    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:51:56.375226    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:56.385500    9750 logs.go:276] 0 containers: []
	W0318 13:51:56.385512    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:56.385587    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:56.396096    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:51:56.396118    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:51:56.396124    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:51:56.411541    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:56.411551    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:56.434568    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:51:56.434574    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:51:56.484966    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:51:56.484976    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:51:56.503305    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:51:56.503320    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:51:56.527968    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:56.527984    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:56.534835    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:51:56.534848    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:51:56.553724    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:51:56.553735    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:51:56.568849    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:51:56.568860    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:51:56.581724    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:51:56.581736    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:56.595106    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:56.595120    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:56.635352    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:56.635366    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:56.673257    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:51:56.673268    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:51:56.688638    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:51:56.688653    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:51:56.716240    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:51:56.716253    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:51:56.732618    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:51:56.732632    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:51:56.595847    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:51:56.595915    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:56.607646    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:51:56.607721    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:56.618588    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:51:56.618655    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:56.629600    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:51:56.629664    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:56.640339    9587 logs.go:276] 0 containers: []
	W0318 13:51:56.640353    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:56.640414    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:56.652360    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:51:56.652379    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:56.652384    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:56.688721    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:56.688730    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:56.693745    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:51:56.693759    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:51:56.705760    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:51:56.705771    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:51:56.724361    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:51:56.724373    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:51:56.739433    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:51:56.739450    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:51:56.752470    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:51:56.752481    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:51:56.764338    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:51:56.764349    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:51:56.780045    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:51:56.780059    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:56.791762    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:56.791776    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:56.826723    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:51:56.826732    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:51:56.841138    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:51:56.841149    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:51:56.853095    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:51:56.853110    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:51:56.873786    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:51:56.873800    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:51:56.885839    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:56.885850    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:59.412661    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:59.246868    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:04.414808    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:04.414894    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:04.426861    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:52:04.426936    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:04.437857    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:52:04.437923    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:04.449565    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:52:04.449638    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:04.460923    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:52:04.460991    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:04.471749    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:52:04.471819    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:04.483934    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:52:04.484003    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:04.495602    9587 logs.go:276] 0 containers: []
	W0318 13:52:04.495614    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:04.495676    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:04.507521    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:52:04.507536    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:52:04.507540    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:52:04.522912    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:52:04.522920    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:52:04.539101    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:52:04.539112    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:52:04.557816    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:04.557828    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:04.593541    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:52:04.593558    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:52:04.605491    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:52:04.605500    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:52:04.618363    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:52:04.618373    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:52:04.635244    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:04.635254    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:04.640192    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:52:04.640203    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:52:04.656253    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:52:04.656264    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:52:04.668586    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:52:04.668598    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:52:04.680344    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:04.680354    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:04.704346    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:04.704355    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:04.740348    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:52:04.740360    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:52:04.754846    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:52:04.754860    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:04.248793    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:04.248943    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:04.260246    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:52:04.260316    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:04.271020    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:52:04.271094    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:04.284315    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:52:04.284383    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:04.294744    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:52:04.294811    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:04.305222    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:52:04.305291    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:04.316227    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:52:04.316292    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:04.326561    9750 logs.go:276] 0 containers: []
	W0318 13:52:04.326572    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:04.326624    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:04.338093    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:52:04.338113    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:52:04.338118    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:52:04.349150    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:04.349160    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:04.353689    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:04.353696    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:04.392504    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:52:04.392518    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:52:04.406775    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:52:04.406786    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:52:04.418962    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:52:04.418977    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:52:04.436848    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:52:04.436861    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:52:04.451654    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:04.451666    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:04.475285    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:52:04.475299    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:04.488253    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:52:04.488264    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:52:04.506702    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:52:04.506716    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:52:04.519165    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:04.519177    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:04.563811    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:52:04.563822    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:52:04.603974    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:52:04.603990    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:52:04.619995    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:52:04.620006    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:52:04.636919    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:52:04.636928    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:52:07.155581    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:07.269450    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:12.158224    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:12.158453    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:12.177015    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:52:12.177102    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:12.192402    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:52:12.192473    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:12.203799    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:52:12.203862    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:12.214294    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:52:12.214362    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:12.224574    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:52:12.224641    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:12.235637    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:52:12.235701    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:12.247127    9750 logs.go:276] 0 containers: []
	W0318 13:52:12.247140    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:12.247195    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:12.257590    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:52:12.257609    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:12.257615    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:12.296676    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:52:12.296687    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:52:12.312219    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:52:12.312231    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:52:12.324543    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:52:12.324556    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:52:12.337323    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:52:12.337336    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:52:12.353397    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:52:12.353407    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:52:12.377953    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:12.377968    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:12.416007    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:52:12.416016    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:52:12.447284    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:52:12.447295    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:52:12.472361    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:12.472368    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:12.477095    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:52:12.477106    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:52:12.515766    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:12.515784    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:12.539632    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:52:12.539650    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:52:12.553976    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:52:12.553990    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:52:12.570256    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:52:12.570269    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:52:12.586427    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:52:12.586438    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:12.271709    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:12.271790    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:12.283335    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:52:12.283402    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:12.294291    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:52:12.294353    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:12.305622    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:52:12.305697    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:12.318012    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:52:12.318090    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:12.329427    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:52:12.329496    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:12.340611    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:52:12.340684    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:12.353154    9587 logs.go:276] 0 containers: []
	W0318 13:52:12.353166    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:12.353227    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:12.364412    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:52:12.364431    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:52:12.364437    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:52:12.380406    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:12.380416    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:12.415085    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:52:12.415099    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:52:12.428266    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:52:12.428278    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:52:12.443733    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:52:12.443745    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:52:12.459764    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:52:12.459775    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:12.471914    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:12.471924    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:12.509236    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:52:12.509249    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:52:12.524243    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:52:12.524256    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:52:12.536200    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:52:12.536212    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:52:12.555261    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:52:12.555270    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:52:12.571725    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:52:12.571734    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:52:12.590293    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:52:12.590303    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:52:12.604198    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:12.604210    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:12.628473    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:12.628483    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:15.134897    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:15.100996    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:20.137107    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:20.137175    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:20.148815    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:52:20.148887    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:20.160038    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:52:20.160110    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:20.172054    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:52:20.172125    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:20.183675    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:52:20.183751    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:20.194829    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:52:20.194895    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:20.206498    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:52:20.206560    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:20.217518    9587 logs.go:276] 0 containers: []
	W0318 13:52:20.217529    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:20.217586    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:20.229007    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:52:20.229024    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:52:20.229030    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:52:20.246633    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:52:20.246642    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:52:20.259053    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:52:20.259061    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:52:20.272185    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:52:20.272194    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:52:20.290661    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:20.290674    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:20.327142    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:52:20.327155    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:52:20.342171    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:20.342184    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:20.368646    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:52:20.368655    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:20.381399    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:20.381410    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:20.419347    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:52:20.419361    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:52:20.432477    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:52:20.432489    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:52:20.447532    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:20.447544    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:20.452425    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:52:20.452432    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:52:20.464356    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:52:20.464371    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:52:20.479602    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:52:20.479614    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
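The "Gathering logs for <component> [<id>] ..." lines then fan out over those IDs, each resolving to a `docker logs --tail 400 <id>` run through bash. A sketch of one such gather (the 400-line tail is copied from the trace; the container ID is the apiserver's from above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainerLogs returns the last n lines of a container's logs,
    // mirroring the `docker logs --tail 400 <id>` calls in this trace.
    // CombinedOutput captures both streams, since docker logs replays the
    // container's stdout and stderr separately.
    func tailContainerLogs(id string, n int) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
        return string(out), err
    }

    func main() {
        logs, err := tailContainerLogs("ffb4a5516c2c", 400)
        if err != nil {
            fmt.Println("gather failed:", err)
            return
        }
        fmt.Print(logs)
    }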
	I0318 13:52:20.103505    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:20.103728    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:20.120255    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:52:20.120344    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:20.132876    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:52:20.132946    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:20.144021    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:52:20.144093    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:20.155542    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:52:20.155616    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:20.167727    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:52:20.167800    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:20.184325    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:52:20.184362    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:20.196933    9750 logs.go:276] 0 containers: []
	W0318 13:52:20.196943    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:20.196990    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:20.215760    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:52:20.215778    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:52:20.215784    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:52:20.230399    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:52:20.230409    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:52:20.244240    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:52:20.244254    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:52:20.258660    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:52:20.258673    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:20.270755    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:20.270767    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:20.307618    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:52:20.307630    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:52:20.322644    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:52:20.322655    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:52:20.337766    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:52:20.337777    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:52:20.354282    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:20.354296    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:20.392780    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:20.392798    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:20.397828    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:52:20.397837    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:52:20.410131    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:52:20.410143    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:52:20.422394    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:52:20.422407    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:52:20.437758    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:20.437774    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:20.462171    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:52:20.462186    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:52:20.501808    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:52:20.501824    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:52:23.021568    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:22.997949    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:28.022467    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:28.022779    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:28.046019    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:52:28.046127    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:28.062137    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:52:28.062217    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:28.075559    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:52:28.075632    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:28.087419    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:52:28.087490    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:28.098842    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:52:28.098913    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:28.110158    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:52:28.110225    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:28.121162    9750 logs.go:276] 0 containers: []
	W0318 13:52:28.121174    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:28.121235    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:28.133402    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:52:28.133471    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:52:28.133481    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:52:28.185428    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:52:28.185445    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:52:28.200489    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:52:28.200505    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:52:28.217235    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:52:28.217252    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:52:28.233097    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:52:28.233108    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:52:28.245263    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:28.245289    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:28.250278    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:52:28.250286    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:52:28.272374    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:52:28.272388    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:52:28.000339    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:28.000743    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:28.032832    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:52:28.032953    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:28.052821    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:52:28.052924    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:28.068134    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:52:28.068211    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:28.080867    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:52:28.080937    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:28.092496    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:52:28.092567    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:28.103876    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:52:28.103941    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:28.115270    9587 logs.go:276] 0 containers: []
	W0318 13:52:28.115282    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:28.115342    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:28.127139    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:52:28.127157    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:28.127163    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:28.131921    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:28.131932    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:28.169351    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:52:28.169363    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:52:28.181347    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:52:28.181359    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:52:28.193754    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:52:28.193767    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:52:28.206807    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:52:28.206819    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:52:28.225941    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:28.225953    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:28.261180    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:52:28.261203    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:52:28.277907    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:52:28.277921    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:52:28.293337    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:52:28.293351    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:52:28.305683    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:52:28.305696    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:52:28.318525    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:52:28.318536    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:52:28.331138    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:52:28.331148    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:52:28.347056    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:28.347070    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:28.372049    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:52:28.372068    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
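The "container status" probe is a single bash -c line with the fallback logic baked in: the backticked `which crictl || echo crictl` resolves crictl's path (or leaves the bare name if it is not installed), and the trailing `|| sudo docker ps -a` covers the case where the crictl call itself fails. Invoking the same one-liner from Go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The inner command substitution picks crictl when available;
        // the outer || falls back to docker when crictl is absent or errors.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
        }
        fmt.Print(string(out))
    }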
	I0318 13:52:30.885059    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:28.285387    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:52:28.285399    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:52:28.298258    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:52:28.298270    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:52:28.312867    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:52:28.312887    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:28.326299    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:28.326312    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:28.364348    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:28.364358    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:28.401777    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:52:28.401788    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:52:28.419085    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:52:28.419096    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:52:28.434166    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:28.434176    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:30.959622    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:35.887332    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:35.887524    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:35.906564    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:52:35.906661    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:35.921232    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:52:35.921304    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:35.933546    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:52:35.933619    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:35.944709    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:52:35.944776    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:35.955689    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:52:35.955755    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:35.967374    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:52:35.967440    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:35.978714    9587 logs.go:276] 0 containers: []
	W0318 13:52:35.978728    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:35.978787    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:35.990336    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:52:35.990353    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:35.990359    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:36.025633    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:36.025651    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:36.030911    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:52:36.030923    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:52:36.052172    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:36.052185    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:36.078507    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:36.078526    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:36.118904    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:52:36.118918    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:52:36.133992    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:52:36.134007    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:52:36.146645    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:52:36.146659    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:52:36.160094    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:52:36.160105    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:52:36.176509    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:52:36.176522    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:36.189036    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:52:36.189052    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:52:36.204362    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:52:36.204382    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:52:36.218057    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:52:36.218069    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:52:36.231310    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:52:36.231322    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:52:36.250813    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:52:36.250826    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:52:35.960175    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:35.960203    9750 kubeadm.go:591] duration metric: took 4m3.736581041s to restartPrimaryControlPlane
	W0318 13:52:35.960230    9750 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:52:35.960244    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0318 13:52:37.016284    9750 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.056034416s)
	I0318 13:52:37.016365    9750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:52:37.021221    9750 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:52:37.023987    9750 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:52:37.026719    9750 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:52:37.026725    9750 kubeadm.go:156] found existing configuration files:
	
	I0318 13:52:37.026749    9750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/admin.conf
	I0318 13:52:37.029370    9750 kubeadm.go:162] "https://control-plane.minikube.internal:51361" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:52:37.029400    9750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:52:37.032250    9750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/kubelet.conf
	I0318 13:52:37.034736    9750 kubeadm.go:162] "https://control-plane.minikube.internal:51361" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:52:37.034758    9750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:52:37.037758    9750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/controller-manager.conf
	I0318 13:52:37.040888    9750 kubeadm.go:162] "https://control-plane.minikube.internal:51361" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:52:37.040911    9750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:52:37.043779    9750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/scheduler.conf
	I0318 13:52:37.046163    9750 kubeadm.go:162] "https://control-plane.minikube.internal:51361" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:52:37.046184    9750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
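kubeadm.go:162 runs the same check-then-delete cycle for each of the four kubeconfigs: grep the file for the expected control-plane URL, and if grep exits non-zero (file missing, as here, or pointing at a different endpoint) remove it so the upcoming `kubeadm init` regenerates it. A compact sketch of that loop, with the URL and file names copied from the trace:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:51361"
        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
        for _, f := range files {
            path := "/etc/kubernetes/" + f
            // grep exits non-zero when the file is absent or lacks the
            // endpoint; either way the config is stale and gets removed.
            if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
                exec.Command("sudo", "rm", "-f", path).Run()
            }
        }
    }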
	I0318 13:52:37.049365    9750 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:52:37.067978    9750 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0318 13:52:37.068012    9750 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:52:37.116096    9750 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:52:37.116152    9750 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:52:37.116199    9750 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0318 13:52:37.164389    9750 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:52:37.173599    9750 out.go:204]   - Generating certificates and keys ...
	I0318 13:52:37.173635    9750 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:52:37.173666    9750 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:52:37.173709    9750 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:52:37.173753    9750 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:52:37.173791    9750 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:52:37.173819    9750 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:52:37.173860    9750 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:52:37.173892    9750 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:52:37.173933    9750 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:52:37.173975    9750 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:52:37.174007    9750 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:52:37.174043    9750 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:52:37.207222    9750 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:52:37.311938    9750 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:52:37.448189    9750 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:52:37.502096    9750 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:52:37.532414    9750 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:52:37.532834    9750 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:52:37.532868    9750 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:52:37.625907    9750 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:52:37.635072    9750 out.go:204]   - Booting up control plane ...
	I0318 13:52:37.635129    9750 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:52:37.635173    9750 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:52:37.635208    9750 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:52:37.635277    9750 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:52:37.635361    9750 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:52:38.772189    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:42.130769    9750 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501555 seconds
	I0318 13:52:42.130862    9750 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:52:42.136249    9750 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:52:42.657774    9750 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:52:42.658230    9750 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-813000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:52:43.162542    9750 kubeadm.go:309] [bootstrap-token] Using token: vvdmxl.j3rogto4uypt18n2
	I0318 13:52:43.169046    9750 out.go:204]   - Configuring RBAC rules ...
	I0318 13:52:43.169112    9750 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:52:43.169166    9750 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:52:43.177549    9750 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:52:43.178380    9750 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:52:43.179267    9750 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:52:43.180087    9750 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:52:43.183735    9750 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:52:43.380248    9750 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:52:43.566436    9750 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:52:43.566996    9750 kubeadm.go:309] 
	I0318 13:52:43.567071    9750 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:52:43.567082    9750 kubeadm.go:309] 
	I0318 13:52:43.567125    9750 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:52:43.567135    9750 kubeadm.go:309] 
	I0318 13:52:43.567150    9750 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:52:43.567193    9750 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:52:43.567220    9750 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:52:43.567224    9750 kubeadm.go:309] 
	I0318 13:52:43.567253    9750 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:52:43.567256    9750 kubeadm.go:309] 
	I0318 13:52:43.567286    9750 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:52:43.567290    9750 kubeadm.go:309] 
	I0318 13:52:43.567346    9750 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:52:43.567388    9750 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:52:43.567450    9750 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:52:43.567457    9750 kubeadm.go:309] 
	I0318 13:52:43.567498    9750 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:52:43.567540    9750 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:52:43.567543    9750 kubeadm.go:309] 
	I0318 13:52:43.567583    9750 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token vvdmxl.j3rogto4uypt18n2 \
	I0318 13:52:43.567643    9750 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f245f57130bb8b4395382cd74200f36af238eb522c12e31804ffbb421429194 \
	I0318 13:52:43.567653    9750 kubeadm.go:309] 	--control-plane 
	I0318 13:52:43.567657    9750 kubeadm.go:309] 
	I0318 13:52:43.567726    9750 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:52:43.567732    9750 kubeadm.go:309] 
	I0318 13:52:43.567776    9750 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token vvdmxl.j3rogto4uypt18n2 \
	I0318 13:52:43.567834    9750 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f245f57130bb8b4395382cd74200f36af238eb522c12e31804ffbb421429194 
	I0318 13:52:43.567893    9750 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
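The `--discovery-token-ca-cert-hash sha256:...` value in the join commands above is, per kubeadm's documented scheme, a SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. A sketch of recomputing it from the CA file (the path is minikube's usual in-VM cert location, assumed here rather than shown in the trace):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }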
	I0318 13:52:43.567903    9750 cni.go:84] Creating CNI manager for ""
	I0318 13:52:43.567911    9750 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:52:43.572394    9750 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:52:43.580362    9750 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:52:43.583248    9750 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
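ssh_runner.go:362 then streams a 457-byte conflist from memory into /etc/cni/net.d/1-k8s.conflist. The actual payload is not shown in the trace; the sketch below writes a representative bridge + host-local config of the kind a bridge-CNI setup uses, with every field value illustrative:

    package main

    import "os"

    // A representative bridge+host-local CNI config; the real 457-byte
    // payload minikube copies is not included in this trace, so subnet,
    // plugin list, and version here are placeholders.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16",
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }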
	I0318 13:52:43.587924    9750 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:52:43.587967    9750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-813000 minikube.k8s.io/updated_at=2024_03_18T13_52_43_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=stopped-upgrade-813000 minikube.k8s.io/primary=true
	I0318 13:52:43.587968    9750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:52:43.625531    9750 kubeadm.go:1107] duration metric: took 37.598792ms to wait for elevateKubeSystemPrivileges
	I0318 13:52:43.629441    9750 ops.go:34] apiserver oom_adj: -16
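The `cat /proc/$(pgrep kube-apiserver)/oom_adj` probe a few lines up resolves here to -16, meaning the apiserver is shielded from the OOM killer. A native Go version of the same read (oom_adj is the legacy knob; modern kernels prefer oom_score_adj):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // pgrep may print several PIDs, one per line; take the first,
        // like the shell substitution in the trace effectively does.
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.Fields(string(out))[0]
        raw, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(raw)))
    }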
	W0318 13:52:43.629464    9750 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:52:43.629469    9750 kubeadm.go:393] duration metric: took 4m11.419065459s to StartCluster
	I0318 13:52:43.629479    9750 settings.go:142] acquiring lock: {Name:mkb16a292265123b9734bd031ef06799b38c3f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:52:43.629561    9750 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:52:43.629966    9750 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/kubeconfig: {Name:mk6a62990bf9328d54440f15380010f8199a9228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:52:43.630166    9750 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:52:43.634412    9750 out.go:177] * Verifying Kubernetes components...
	I0318 13:52:43.630226    9750 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:52:43.630260    9750 config.go:182] Loaded profile config "stopped-upgrade-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 13:52:43.642228    9750 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-813000"
	I0318 13:52:43.642234    9750 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-813000"
	I0318 13:52:43.642231    9750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:52:43.642245    9750 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-813000"
	W0318 13:52:43.642265    9750 addons.go:243] addon storage-provisioner should already be in state true
	I0318 13:52:43.642247    9750 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-813000"
	I0318 13:52:43.642281    9750 host.go:66] Checking if "stopped-upgrade-813000" exists ...
	I0318 13:52:43.643523    9750 kapi.go:59] client config for stopped-upgrade-813000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/client.key", CAFile:"/Users/jenkins/minikube-integration/18421-6777/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105e86a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 13:52:43.643662    9750 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-813000"
	W0318 13:52:43.643667    9750 addons.go:243] addon default-storageclass should already be in state true
	I0318 13:52:43.643673    9750 host.go:66] Checking if "stopped-upgrade-813000" exists ...
	I0318 13:52:43.648336    9750 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:52:43.774485    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:43.774610    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:43.785509    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:52:43.785563    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:43.800850    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:52:43.800920    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:43.812941    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:52:43.813011    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:43.823855    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:52:43.823923    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:43.834952    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:52:43.835024    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:43.847154    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:52:43.847236    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:43.858364    9587 logs.go:276] 0 containers: []
	W0318 13:52:43.858375    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:43.858431    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:43.869263    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:52:43.869279    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:52:43.869285    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:52:43.885662    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:52:43.885672    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:52:43.905684    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:43.905699    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:43.932766    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:43.932785    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:43.987751    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:52:43.987768    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:52:44.001040    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:44.001052    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:44.005827    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:52:44.005836    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:52:44.019732    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:52:44.019743    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:52:44.032931    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:52:44.032948    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:52:44.048923    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:52:44.048938    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:44.062635    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:44.062649    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:44.097260    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:52:44.097281    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:52:44.112983    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:52:44.112996    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:52:44.125914    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:52:44.125926    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:52:44.145485    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:52:44.145499    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:52:43.652401    9750 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:52:43.652409    9750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:52:43.652416    9750 sshutil.go:53] new ssh client: &{IP:localhost Port:51326 SSHKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/stopped-upgrade-813000/id_rsa Username:docker}
	I0318 13:52:43.653063    9750 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:52:43.653069    9750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:52:43.653073    9750 sshutil.go:53] new ssh client: &{IP:localhost Port:51326 SSHKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/stopped-upgrade-813000/id_rsa Username:docker}
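The two "scp memory -->" lines push the addon manifests straight from memory over the SSH connection described by sshutil.go:53, rather than copying local files. A sketch of that pattern with golang.org/x/crypto/ssh, reusing the address, user, and key path from the trace (the manifest bytes are a placeholder):

    package main

    import (
        "bytes"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // pushFile streams bytes over an SSH session into a remote path,
    // the same "scp memory --> file" idea as ssh_runner.go:362 above.
    func pushFile(addr, keyPath, remotePath string, data []byte) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
        })
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
    }

    func main() {
        manifest := []byte("# storage-provisioner.yaml contents go here\n")
        err := pushFile("localhost:51326",
            "/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/stopped-upgrade-813000/id_rsa",
            "/etc/kubernetes/addons/storage-provisioner.yaml", manifest)
        if err != nil {
            fmt.Println(err)
        }
    }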
	I0318 13:52:43.741828    9750 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:52:43.748720    9750 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:52:43.748771    9750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:43.752560    9750 api_server.go:72] duration metric: took 122.382875ms to wait for apiserver process to appear ...
	I0318 13:52:43.752567    9750 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:52:43.752573    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:43.784544    9750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:52:43.784544    9750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
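Both addon applies run the version-pinned kubectl with KUBECONFIG forced through the environment under sudo (sudo accepts leading VAR=value assignments), so the apply targets the in-VM kubeconfig regardless of the caller's own. A sketch of one such invocation:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Paths and the v1.24.1 binary location are taken from the trace.
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.24.1/kubectl",
            "apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Println("apply failed:", err)
        }
    }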
	I0318 13:52:46.660447    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:48.754690    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:48.754727    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:51.660752    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:51.660946    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:51.678893    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:52:51.678991    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:51.692901    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:52:51.692978    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:51.704781    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:52:51.704856    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:51.716408    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:52:51.716478    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:51.727397    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:52:51.727460    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:51.738613    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:52:51.738677    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:51.749298    9587 logs.go:276] 0 containers: []
	W0318 13:52:51.749309    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:51.749365    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:51.760206    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:52:51.760223    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:51.760228    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:51.800458    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:52:51.800470    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:52:51.815163    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:52:51.815173    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:52:51.827878    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:51.827891    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:51.832386    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:52:51.832393    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:52:51.845395    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:52:51.845406    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:52:51.857832    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:51.857843    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:51.883296    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:52:51.883303    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:52:51.902398    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:52:51.902409    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:52:51.917570    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:52:51.917580    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:52:51.936236    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:52:51.936246    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:52:51.947685    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:52:51.947696    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:51.960904    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:51.960916    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:51.996020    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:52:51.996029    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:52:52.011572    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:52:52.011587    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:52:54.526379    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:53.755000    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:53.755022    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:59.528604    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:59.528705    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:59.542877    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:52:59.542951    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:59.554209    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:52:59.554282    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:59.565087    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:52:59.565158    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:59.575502    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:52:59.575564    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:59.586337    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:52:59.586410    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:59.597353    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:52:59.597427    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:59.611833    9587 logs.go:276] 0 containers: []
	W0318 13:52:59.611845    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:59.611906    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:59.622289    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:52:59.622305    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:52:59.622310    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:52:59.633925    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:52:59.633934    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:52:59.645324    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:59.645333    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:59.680919    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:52:59.680933    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:52:59.702735    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:52:59.702750    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:52:59.720580    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:59.720589    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:59.753558    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:59.753568    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:59.758521    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:52:59.758528    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:52:59.770772    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:52:59.770785    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:52:59.783193    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:52:59.783210    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:52:59.799644    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:52:59.799657    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:52:59.811385    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:59.811399    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:59.835046    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:52:59.835054    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:59.846282    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:52:59.846296    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:52:59.860909    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:52:59.860920    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:52:58.755343    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:58.755416    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:02.374349    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:03.755939    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:03.756002    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:07.376679    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:07.376806    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:53:07.389286    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:53:07.389354    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:53:07.401237    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:53:07.401306    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:53:07.411779    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:53:07.411855    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:53:07.422282    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:53:07.422348    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:53:07.432741    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:53:07.432812    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:53:07.442904    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:53:07.442974    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:53:07.452730    9587 logs.go:276] 0 containers: []
	W0318 13:53:07.452743    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:07.452803    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:53:07.463091    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:53:07.463107    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:07.463112    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:07.467474    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:07.467485    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:53:07.505044    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:53:07.505057    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:53:07.516881    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:53:07.516892    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:53:07.529874    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:53:07.529887    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:53:07.549806    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:07.549821    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:07.583632    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:53:07.583642    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:07.594854    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:53:07.594868    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:53:07.606584    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:53:07.606600    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:53:07.618192    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:53:07.618202    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:53:07.633253    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:53:07.633266    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:53:07.647477    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:53:07.647486    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:53:07.661104    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:53:07.661118    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:53:07.672514    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:53:07.672524    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:53:07.684106    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:53:07.684116    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:53:10.210647    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:08.756690    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:08.756716    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:13.757495    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:13.757550    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0318 13:53:14.177902    9750 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0318 13:53:14.183050    9750 out.go:177] * Enabled addons: storage-provisioner
	I0318 13:53:15.212948    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:15.213272    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:53:15.248273    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:53:15.248401    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:53:15.266581    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:53:15.266674    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:53:15.280617    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:53:15.280691    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:53:15.292261    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:53:15.292338    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:53:15.303385    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:53:15.303456    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:53:15.314447    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:53:15.314516    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:53:15.325009    9587 logs.go:276] 0 containers: []
	W0318 13:53:15.325020    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:15.325074    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:53:15.335920    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:53:15.335939    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:15.335944    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:15.370273    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:53:15.370284    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:53:15.384363    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:53:15.384373    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:53:15.399725    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:53:15.399735    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:53:15.411920    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:53:15.411930    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:53:15.423556    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:15.423567    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:15.427969    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:53:15.427978    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:53:15.445220    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:53:15.445233    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:15.457797    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:15.457811    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:53:15.492046    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:53:15.492058    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:53:15.503672    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:53:15.503682    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:53:15.515394    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:53:15.515406    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:53:15.527415    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:53:15.527426    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:53:15.542082    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:53:15.542095    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:53:15.554246    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:53:15.554257    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:53:14.187689    9750 addons.go:505] duration metric: took 30.557646333s for enable addons: enabled=[storage-provisioner]
	I0318 13:53:18.079002    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:18.758631    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:18.758677    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:23.081214    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:23.081351    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:53:23.105466    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:53:23.105540    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:53:23.116878    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:53:23.116945    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:53:23.127466    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:53:23.127530    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:53:23.138418    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:53:23.138479    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:53:23.149409    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:53:23.149471    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:53:23.160030    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:53:23.160089    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:53:23.170448    9587 logs.go:276] 0 containers: []
	W0318 13:53:23.170459    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:23.170514    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:53:23.181531    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:53:23.181556    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:53:23.181561    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:53:23.195812    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:53:23.195822    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:53:23.207352    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:53:23.207363    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:53:23.218695    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:53:23.218708    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:53:23.230880    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:53:23.230891    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:53:23.243418    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:23.243432    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:23.248042    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:23.248048    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:53:23.285895    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:53:23.285909    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:23.297571    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:23.297581    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:23.330502    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:53:23.330513    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:53:23.344982    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:53:23.344992    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:53:23.356164    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:53:23.356175    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:53:23.367282    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:53:23.367295    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:53:23.382373    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:53:23.382384    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:53:23.399936    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:53:23.399947    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:53:25.926904    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:23.760501    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:23.760588    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:30.929156    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:30.929402    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:53:30.954589    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:53:30.954709    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:53:30.970955    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:53:30.971041    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:53:30.984923    9587 logs.go:276] 4 containers: [a6cc97ce5c62 81b058de957e 16c60d7d510f 61927732b548]
	I0318 13:53:30.985004    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:53:30.996742    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:53:30.996813    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:53:31.009593    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:53:31.009663    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:53:31.020613    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:53:31.020679    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:53:31.031884    9587 logs.go:276] 0 containers: []
	W0318 13:53:31.031898    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:31.031952    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:53:31.042330    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:53:31.042346    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:31.042352    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:53:31.078936    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:53:31.078950    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:53:31.093850    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:53:31.093860    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:31.107136    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:53:31.107146    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	I0318 13:53:31.119350    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:31.119362    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:31.123673    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:53:31.123683    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:53:31.140174    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:53:31.140185    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:53:31.152271    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:31.152282    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:31.185729    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:53:31.185737    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:53:31.201646    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:53:31.201655    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:53:31.213267    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:53:31.213276    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:53:31.228447    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:53:31.228457    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:53:31.251936    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:53:31.251946    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:53:31.263092    9587 logs.go:123] Gathering logs for coredns [61927732b548] ...
	I0318 13:53:31.263101    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61927732b548"
	I0318 13:53:31.274861    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:53:31.274872    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:53:28.762384    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:28.762429    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:33.794320    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:33.764671    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:33.764706    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:38.796501    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:38.796614    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:53:38.812056    9587 logs.go:276] 1 containers: [ffb4a5516c2c]
	I0318 13:53:38.812128    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:53:38.824270    9587 logs.go:276] 1 containers: [9cf9ff66a899]
	I0318 13:53:38.824328    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:53:38.840769    9587 logs.go:276] 5 containers: [d95c9d62ad55 60cb95074fd8 a6cc97ce5c62 81b058de957e 16c60d7d510f]
	I0318 13:53:38.840843    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:53:38.851783    9587 logs.go:276] 1 containers: [cc2d5d3cf37b]
	I0318 13:53:38.851852    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:53:38.869039    9587 logs.go:276] 1 containers: [de0d63aa8a27]
	I0318 13:53:38.869104    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:53:38.884660    9587 logs.go:276] 1 containers: [ba4312cde4ec]
	I0318 13:53:38.884723    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:53:38.894587    9587 logs.go:276] 0 containers: []
	W0318 13:53:38.894597    9587 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:38.894643    9587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:53:38.905307    9587 logs.go:276] 1 containers: [7bc778b0d817]
	I0318 13:53:38.905322    9587 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:38.905328    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:38.909863    9587 logs.go:123] Gathering logs for kube-apiserver [ffb4a5516c2c] ...
	I0318 13:53:38.909870    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb4a5516c2c"
	I0318 13:53:38.923729    9587 logs.go:123] Gathering logs for coredns [a6cc97ce5c62] ...
	I0318 13:53:38.923742    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6cc97ce5c62"
	I0318 13:53:38.935909    9587 logs.go:123] Gathering logs for kube-controller-manager [ba4312cde4ec] ...
	I0318 13:53:38.935921    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4312cde4ec"
	I0318 13:53:38.953841    9587 logs.go:123] Gathering logs for storage-provisioner [7bc778b0d817] ...
	I0318 13:53:38.953859    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bc778b0d817"
	I0318 13:53:38.965285    9587 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:38.965296    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:38.999347    9587 logs.go:123] Gathering logs for etcd [9cf9ff66a899] ...
	I0318 13:53:38.999357    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf9ff66a899"
	I0318 13:53:39.016022    9587 logs.go:123] Gathering logs for coredns [60cb95074fd8] ...
	I0318 13:53:39.016033    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60cb95074fd8"
	I0318 13:53:39.027659    9587 logs.go:123] Gathering logs for coredns [81b058de957e] ...
	I0318 13:53:39.027675    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b058de957e"
	I0318 13:53:39.039449    9587 logs.go:123] Gathering logs for kube-scheduler [cc2d5d3cf37b] ...
	I0318 13:53:39.039464    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc2d5d3cf37b"
	I0318 13:53:39.054530    9587 logs.go:123] Gathering logs for Docker ...
	I0318 13:53:39.054540    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:53:39.078599    9587 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:39.078621    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:53:39.115527    9587 logs.go:123] Gathering logs for container status ...
	I0318 13:53:39.115540    9587 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:39.127620    9587 logs.go:123] Gathering logs for coredns [d95c9d62ad55] ...
	I0318 13:53:39.127635    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d95c9d62ad55"
	I0318 13:53:39.139235    9587 logs.go:123] Gathering logs for coredns [16c60d7d510f] ...
	I0318 13:53:39.139247    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16c60d7d510f"
	W0318 13:53:39.151153    9587 logs.go:130] failed coredns [16c60d7d510f]: command: /bin/bash -c "docker logs --tail 400 16c60d7d510f" /bin/bash -c "docker logs --tail 400 16c60d7d510f": Process exited with status 1
	stdout:
	
	stderr:
	Error: No such container: 16c60d7d510f
	 output: 
	** stderr ** 
	Error: No such container: 16c60d7d510f
	
	** /stderr **
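	
	Note: the "No such container" failure above is a benign race in the log-gathering loop: 16c60d7d510f was listed by `docker ps -a` moments earlier but was removed when its coredns pod restarted (the container-status table further down shows the replacement IDs d95c9d62ad55 and 60cb95074fd8). A minimal Go sketch of a collector step that tolerates this race; the helper name and handling are illustrative, not minikube's actual logs.go code:
	
	    package main
	
	    import (
	        "log"
	        "os"
	        "os/exec"
	    )
	
	    // collectLogs tolerates containers that vanish between `docker ps -a`
	    // and `docker logs` (the race behind the 16c60d7d510f failure above).
	    func collectLogs(id string) {
	        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	        if err != nil {
	            // "Error: No such container: <id>" exits with status 1; warn
	            // and keep gathering the remaining containers.
	            log.Printf("W failed %s: %v: %s", id, err, out)
	            return
	        }
	        os.Stdout.Write(out)
	    }
	
	    func main() {
	        collectLogs("16c60d7d510f") // removed container: warns instead of aborting
	    }
	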
	I0318 13:53:39.151162    9587 logs.go:123] Gathering logs for kube-proxy [de0d63aa8a27] ...
	I0318 13:53:39.151168    9587 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de0d63aa8a27"
	I0318 13:53:38.766945    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:38.767005    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:41.664471    9587 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:46.666753    9587 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:46.670020    9587 out.go:177] 
	W0318 13:53:46.674083    9587 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0318 13:53:46.674096    9587 out.go:239] * 
	W0318 13:53:46.674929    9587 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:53:46.686040    9587 out.go:177] 
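	
	Note: both test processes (pids 9587 and 9750) poll the same endpoint until the 6-minute node-start budget expires, and the ~5 s spacing between each "Checking" and "stopped" pair suggests a 5 s client timeout. A minimal Go sketch of an equivalent standalone probe, assuming that timeout and skipping TLS verification for the self-signed apiserver certificate:
	
	    package main
	
	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )
	
	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second, // matches the ~5 s check spacing above
	            Transport: &http.Transport{
	                // verification skipped purely for diagnostics; the
	                // apiserver certificate is self-signed
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        deadline := time.Now().Add(6 * time.Minute) // the node-start budget
	        for time.Now().Before(deadline) {
	            resp, err := client.Get("https://10.0.2.15:8443/healthz")
	            if err != nil {
	                fmt.Println("stopped:", err) // same shape as the log lines above
	                continue
	            }
	            body, _ := io.ReadAll(resp.Body)
	            resp.Body.Close()
	            fmt.Println(resp.Status, string(body))
	            if resp.StatusCode == http.StatusOK {
	                return
	            }
	        }
	        fmt.Println("apiserver healthz never reported healthy")
	    }
	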
	I0318 13:53:43.769231    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:43.769369    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:53:43.793835    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:53:43.793917    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:53:43.808005    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:53:43.808076    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:53:43.818817    9750 logs.go:276] 2 containers: [5ef0c31bcb0a 27e00e6f1725]
	I0318 13:53:43.818891    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:53:43.829122    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:53:43.829192    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:53:43.839693    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:53:43.839766    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:53:43.850081    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:53:43.850139    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:53:43.860604    9750 logs.go:276] 0 containers: []
	W0318 13:53:43.860616    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:43.860671    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:53:43.870706    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:53:43.870724    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:43.870729    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:43.907533    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:43.907544    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:43.911408    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:43.911416    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:53:43.947721    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:53:43.947734    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:53:43.961792    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:53:43.961801    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:53:43.975516    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:53:43.975526    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:53:43.986779    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:53:43.986789    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:43.998067    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:53:43.998079    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:53:44.014827    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:53:44.014837    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:53:44.026347    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:53:44.026357    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:53:44.040291    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:53:44.040302    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:53:44.051826    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:53:44.051836    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:53:44.070842    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:53:44.070852    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:53:46.596118    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:51.598331    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:51.598583    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:53:51.623083    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:53:51.623173    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:53:51.636197    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:53:51.636267    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:53:51.647496    9750 logs.go:276] 2 containers: [5ef0c31bcb0a 27e00e6f1725]
	I0318 13:53:51.647554    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:53:51.658081    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:53:51.658153    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:53:51.668884    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:53:51.668949    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:53:51.678996    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:53:51.679052    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:53:51.688718    9750 logs.go:276] 0 containers: []
	W0318 13:53:51.688734    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:51.688796    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:53:51.699143    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:53:51.699160    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:53:51.699166    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:53:51.723644    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:53:51.723650    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:53:51.734926    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:53:51.734937    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:53:51.746243    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:51.746253    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:53:51.782733    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:53:51.782748    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:53:51.797232    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:53:51.797242    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:53:51.811621    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:53:51.811631    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:53:51.826160    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:53:51.826175    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:53:51.839848    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:53:51.839858    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:53:51.857832    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:51.857842    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:51.895791    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:51.895800    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:51.899990    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:53:51.899999    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:53:51.912128    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:53:51.912138    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:54.425683    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-03-18 20:44:35 UTC, ends at Mon 2024-03-18 20:54:02 UTC. --
	Mar 18 20:53:41 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:53:41Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 18 20:53:46 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:53:46Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 18 20:53:47 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:53:47Z" level=error msg="ContainerStats resp: {0x40000b9c00 linux}"
	Mar 18 20:53:47 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:53:47Z" level=error msg="ContainerStats resp: {0x4000358400 linux}"
	Mar 18 20:53:48 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:53:48Z" level=error msg="ContainerStats resp: {0x400076ea40 linux}"
	Mar 18 20:53:49 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:53:49Z" level=error msg="ContainerStats resp: {0x400007f4c0 linux}"
	Mar 18 20:53:49 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:53:49Z" level=error msg="ContainerStats resp: {0x400076fe00 linux}"
	Mar 18 20:53:49 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:53:49Z" level=error msg="ContainerStats resp: {0x400007e3c0 linux}"
	Mar 18 20:53:49 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:53:49Z" level=error msg="ContainerStats resp: {0x4000400b40 linux}"
	Mar 18 20:53:49 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:53:49Z" level=error msg="ContainerStats resp: {0x40008a2140 linux}"
	Mar 18 20:53:49 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:53:49Z" level=error msg="ContainerStats resp: {0x4000401180 linux}"
	Mar 18 20:53:49 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:53:49Z" level=error msg="ContainerStats resp: {0x40008a2900 linux}"
	Mar 18 20:53:51 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:53:51Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 18 20:53:56 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:53:56Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 18 20:53:59 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:53:59Z" level=error msg="ContainerStats resp: {0x40008bcfc0 linux}"
	Mar 18 20:53:59 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:53:59Z" level=error msg="ContainerStats resp: {0x40008bdf00 linux}"
	Mar 18 20:54:00 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:54:00Z" level=error msg="ContainerStats resp: {0x400074a0c0 linux}"
	Mar 18 20:54:01 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:54:01Z" level=error msg="ContainerStats resp: {0x400076fe80 linux}"
	Mar 18 20:54:01 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:54:01Z" level=error msg="ContainerStats resp: {0x40004f7a40 linux}"
	Mar 18 20:54:01 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:54:01Z" level=error msg="ContainerStats resp: {0x400074ba80 linux}"
	Mar 18 20:54:01 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:54:01Z" level=error msg="ContainerStats resp: {0x400074bec0 linux}"
	Mar 18 20:54:01 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:54:01Z" level=error msg="ContainerStats resp: {0x40000b9940 linux}"
	Mar 18 20:54:01 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:54:01Z" level=error msg="ContainerStats resp: {0x4000400500 linux}"
	Mar 18 20:54:01 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:54:01Z" level=error msg="ContainerStats resp: {0x40009647c0 linux}"
	Mar 18 20:54:01 running-upgrade-647000 cri-dockerd[3200]: time="2024-03-18T20:54:01Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	d95c9d62ad555       edaa71f2aee88       24 seconds ago      Running             coredns                   2                   4f1d5aee87cd9
	60cb95074fd85       edaa71f2aee88       24 seconds ago      Running             coredns                   2                   3790daac7d135
	a6cc97ce5c623       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   3790daac7d135
	81b058de957e5       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   4f1d5aee87cd9
	7bc778b0d817b       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   b75258c5c804f
	de0d63aa8a277       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   722d3ae1b3c95
	ba4312cde4ec5       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   a6206a820a942
	9cf9ff66a8990       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   092d7ed3d8f46
	ffb4a5516c2c0       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   1429f54943c99
	cc2d5d3cf37b6       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   8b22d0e375c7b
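	
	Note: the IDs in this table are what the repeated enumeration step in the log resolves, one `docker ps -a` per control-plane component using the kubelet's k8s_<name> container-naming convention. A sketch of that step, assuming local docker access (minikube issues the same commands over SSH):
	
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )
	
	    // listComponent mirrors one enumeration step from the log: a single
	    // `docker ps -a` filtered by the k8s_<name> naming convention.
	    func listComponent(name string) []string {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	        if err != nil {
	            return nil
	        }
	        return strings.Fields(string(out))
	    }
	
	    func main() {
	        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
	            "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
	            ids := listComponent(c)
	            fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
	        }
	    }
	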
	
	
	==> coredns [60cb95074fd8] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6170622035953452455.411483156052383274. HINFO: read udp 10.244.0.2:51379->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6170622035953452455.411483156052383274. HINFO: read udp 10.244.0.2:40940->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6170622035953452455.411483156052383274. HINFO: read udp 10.244.0.2:37194->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6170622035953452455.411483156052383274. HINFO: read udp 10.244.0.2:43753->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6170622035953452455.411483156052383274. HINFO: read udp 10.244.0.2:37734->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6170622035953452455.411483156052383274. HINFO: read udp 10.244.0.2:46765->10.0.2.3:53: i/o timeout
	
	
	==> coredns [81b058de957e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1341635174359365054.6416288381683600009. HINFO: read udp 10.244.0.3:34728->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1341635174359365054.6416288381683600009. HINFO: read udp 10.244.0.3:57474->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1341635174359365054.6416288381683600009. HINFO: read udp 10.244.0.3:54541->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1341635174359365054.6416288381683600009. HINFO: read udp 10.244.0.3:54604->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1341635174359365054.6416288381683600009. HINFO: read udp 10.244.0.3:45892->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1341635174359365054.6416288381683600009. HINFO: read udp 10.244.0.3:57968->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1341635174359365054.6416288381683600009. HINFO: read udp 10.244.0.3:41563->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1341635174359365054.6416288381683600009. HINFO: read udp 10.244.0.3:42070->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1341635174359365054.6416288381683600009. HINFO: read udp 10.244.0.3:43742->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1341635174359365054.6416288381683600009. HINFO: read udp 10.244.0.3:54515->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a6cc97ce5c62] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7489795415250313234.1729480803889761602. HINFO: read udp 10.244.0.2:57471->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7489795415250313234.1729480803889761602. HINFO: read udp 10.244.0.2:54915->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7489795415250313234.1729480803889761602. HINFO: read udp 10.244.0.2:42629->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7489795415250313234.1729480803889761602. HINFO: read udp 10.244.0.2:51446->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7489795415250313234.1729480803889761602. HINFO: read udp 10.244.0.2:33068->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d95c9d62ad55] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5197845397369548903.8496490126441627462. HINFO: read udp 10.244.0.3:44908->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5197845397369548903.8496490126441627462. HINFO: read udp 10.244.0.3:40593->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5197845397369548903.8496490126441627462. HINFO: read udp 10.244.0.3:60809->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5197845397369548903.8496490126441627462. HINFO: read udp 10.244.0.3:54104->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5197845397369548903.8496490126441627462. HINFO: read udp 10.244.0.3:38959->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5197845397369548903.8496490126441627462. HINFO: read udp 10.244.0.3:45533->10.0.2.3:53: i/o timeout
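	
	Note: in all four coredns instances the only errors are HINFO probes to 10.0.2.3:53 (the QEMU user-mode-networking DNS) timing out from the pod network, which points at upstream DNS reachability rather than coredns itself. A small Go sketch of a reachability check that forces lookups through that resolver; intended to be run from inside the guest, and the target hostname is an arbitrary choice:
	
	    package main
	
	    import (
	        "context"
	        "fmt"
	        "net"
	        "time"
	    )
	
	    func main() {
	        // Force lookups through the QEMU user-mode DNS at 10.0.2.3:53,
	        // the upstream every coredns instance above fails to reach.
	        r := &net.Resolver{
	            PreferGo: true,
	            Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
	                d := net.Dialer{Timeout: 2 * time.Second}
	                return d.DialContext(ctx, "udp", "10.0.2.3:53")
	            },
	        }
	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	        defer cancel()
	        addrs, err := r.LookupHost(ctx, "kubernetes.io") // arbitrary test name
	        fmt.Println(addrs, err) // an i/o timeout here reproduces the coredns errors
	    }
	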
	
	
	==> describe nodes <==
	Name:               running-upgrade-647000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-647000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=running-upgrade-647000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T13_49_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 20:49:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-647000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 20:54:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 20:49:46 +0000   Mon, 18 Mar 2024 20:49:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 20:49:46 +0000   Mon, 18 Mar 2024 20:49:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 20:49:46 +0000   Mon, 18 Mar 2024 20:49:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 20:49:46 +0000   Mon, 18 Mar 2024 20:49:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-647000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f6327c25ffe497e95e4cd62c0938743
	  System UUID:                0f6327c25ffe497e95e4cd62c0938743
	  Boot ID:                    2942e776-62bb-491b-a5e8-4964c4d03208
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-89p4v                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-tptwh                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-647000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-647000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-647000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-xjn78                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-647000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m2s                   kube-proxy       
	  Normal  Starting                 4m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m23s (x3 over 4m23s)  kubelet          Node running-upgrade-647000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x3 over 4m23s)  kubelet          Node running-upgrade-647000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x2 over 4m23s)  kubelet          Node running-upgrade-647000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-647000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-647000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-647000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-647000 status is now: NodeReady
	  Normal  RegisteredNode           4m4s                   node-controller  Node running-upgrade-647000 event: Registered Node running-upgrade-647000 in Controller
	
	
	==> dmesg <==
	[  +1.851868] systemd-fstab-generator[876]: Ignoring "noauto" for root device
	[  +0.060731] systemd-fstab-generator[887]: Ignoring "noauto" for root device
	[  +0.075991] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +0.176006] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.055800] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +2.648882] systemd-fstab-generator[1288]: Ignoring "noauto" for root device
	[  +0.240304] kauditd_printk_skb: 92 callbacks suppressed
	[Mar18 20:45] systemd-fstab-generator[1946]: Ignoring "noauto" for root device
	[  +2.841469] systemd-fstab-generator[2227]: Ignoring "noauto" for root device
	[  +0.146031] systemd-fstab-generator[2260]: Ignoring "noauto" for root device
	[  +0.087820] systemd-fstab-generator[2271]: Ignoring "noauto" for root device
	[  +0.093989] systemd-fstab-generator[2284]: Ignoring "noauto" for root device
	[ +17.612568] kauditd_printk_skb: 8 callbacks suppressed
	[  +0.228310] systemd-fstab-generator[3157]: Ignoring "noauto" for root device
	[  +0.061526] systemd-fstab-generator[3168]: Ignoring "noauto" for root device
	[  +0.077495] systemd-fstab-generator[3179]: Ignoring "noauto" for root device
	[  +0.070699] systemd-fstab-generator[3193]: Ignoring "noauto" for root device
	[  +2.285449] systemd-fstab-generator[3344]: Ignoring "noauto" for root device
	[  +5.076755] systemd-fstab-generator[3734]: Ignoring "noauto" for root device
	[  +1.043366] systemd-fstab-generator[3862]: Ignoring "noauto" for root device
	[ +19.966916] kauditd_printk_skb: 68 callbacks suppressed
	[Mar18 20:49] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.304350] systemd-fstab-generator[11728]: Ignoring "noauto" for root device
	[  +5.627944] systemd-fstab-generator[12339]: Ignoring "noauto" for root device
	[  +0.478566] systemd-fstab-generator[12471]: Ignoring "noauto" for root device
	
	
	==> etcd [9cf9ff66a899] <==
	{"level":"info","ts":"2024-03-18T20:49:41.382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-03-18T20:49:41.382Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-03-18T20:49:41.384Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T20:49:41.386Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-18T20:49:41.386Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-18T20:49:41.386Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T20:49:41.387Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T20:49:42.056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-18T20:49:42.056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-18T20:49:42.056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-03-18T20:49:42.056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-03-18T20:49:42.056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-18T20:49:42.056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-03-18T20:49:42.056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-18T20:49:42.056Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T20:49:42.057Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T20:49:42.057Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T20:49:42.057Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T20:49:42.057Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-647000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T20:49:42.057Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T20:49:42.057Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T20:49:42.057Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T20:49:42.057Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T20:49:42.057Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-03-18T20:49:42.058Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:54:03 up 9 min,  0 users,  load average: 0.32, 0.37, 0.19
	Linux running-upgrade-647000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [ffb4a5516c2c] <==
	I0318 20:49:43.289858       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0318 20:49:43.301500       1 cache.go:39] Caches are synced for autoregister controller
	I0318 20:49:43.302033       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0318 20:49:43.302279       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0318 20:49:43.302790       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 20:49:43.303850       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 20:49:43.304377       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0318 20:49:44.041202       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0318 20:49:44.205476       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0318 20:49:44.207870       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0318 20:49:44.207942       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 20:49:44.325981       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 20:49:44.338404       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 20:49:44.365384       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0318 20:49:44.367400       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0318 20:49:44.367761       1 controller.go:611] quota admission added evaluator for: endpoints
	I0318 20:49:44.369901       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 20:49:45.336094       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0318 20:49:45.851670       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0318 20:49:45.858821       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0318 20:49:45.876645       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0318 20:49:45.907768       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 20:49:59.149629       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0318 20:49:59.748126       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0318 20:50:00.296319       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [ba4312cde4ec] <==
	I0318 20:49:59.098565       1 shared_informer.go:262] Caches are synced for HPA
	I0318 20:49:59.099083       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0318 20:49:59.108680       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0318 20:49:59.154075       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0318 20:49:59.195838       1 shared_informer.go:262] Caches are synced for taint
	I0318 20:49:59.195872       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0318 20:49:59.195890       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-647000. Assuming now as a timestamp.
	I0318 20:49:59.195906       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0318 20:49:59.195961       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0318 20:49:59.196034       1 event.go:294] "Event occurred" object="running-upgrade-647000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-647000 event: Registered Node running-upgrade-647000 in Controller"
	I0318 20:49:59.200477       1 shared_informer.go:262] Caches are synced for resource quota
	I0318 20:49:59.200544       1 shared_informer.go:262] Caches are synced for resource quota
	I0318 20:49:59.246667       1 shared_informer.go:262] Caches are synced for disruption
	I0318 20:49:59.246676       1 disruption.go:371] Sending events to api server.
	I0318 20:49:59.246668       1 shared_informer.go:262] Caches are synced for stateful set
	I0318 20:49:59.296779       1 shared_informer.go:262] Caches are synced for persistent volume
	I0318 20:49:59.296858       1 shared_informer.go:262] Caches are synced for attach detach
	I0318 20:49:59.297175       1 shared_informer.go:262] Caches are synced for expand
	I0318 20:49:59.346718       1 shared_informer.go:262] Caches are synced for PV protection
	I0318 20:49:59.718645       1 shared_informer.go:262] Caches are synced for garbage collector
	I0318 20:49:59.751303       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xjn78"
	I0318 20:49:59.796357       1 shared_informer.go:262] Caches are synced for garbage collector
	I0318 20:49:59.796437       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0318 20:50:00.099274       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-tptwh"
	I0318 20:50:00.101773       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-89p4v"
	
	
	==> kube-proxy [de0d63aa8a27] <==
	I0318 20:50:00.277405       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0318 20:50:00.277539       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0318 20:50:00.277622       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0318 20:50:00.294327       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0318 20:50:00.294345       1 server_others.go:206] "Using iptables Proxier"
	I0318 20:50:00.294360       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0318 20:50:00.294455       1 server.go:661] "Version info" version="v1.24.1"
	I0318 20:50:00.294459       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 20:50:00.294806       1 config.go:317] "Starting service config controller"
	I0318 20:50:00.294816       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0318 20:50:00.294829       1 config.go:226] "Starting endpoint slice config controller"
	I0318 20:50:00.294830       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0318 20:50:00.295110       1 config.go:444] "Starting node config controller"
	I0318 20:50:00.295439       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0318 20:50:00.395885       1 shared_informer.go:262] Caches are synced for service config
	I0318 20:50:00.395885       1 shared_informer.go:262] Caches are synced for node config
	I0318 20:50:00.395896       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [cc2d5d3cf37b] <==
	W0318 20:49:43.262720       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 20:49:43.262724       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 20:49:43.262771       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 20:49:43.262778       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 20:49:43.262830       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 20:49:43.262838       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 20:49:43.262890       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 20:49:43.262897       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 20:49:43.262935       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 20:49:43.262942       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 20:49:43.262976       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 20:49:43.263003       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 20:49:43.263208       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 20:49:43.263215       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 20:49:43.263282       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 20:49:43.263289       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 20:49:44.101040       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 20:49:44.101080       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 20:49:44.163965       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 20:49:44.164055       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 20:49:44.182585       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 20:49:44.182604       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 20:49:44.271815       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 20:49:44.271929       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 20:49:44.560430       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-03-18 20:44:35 UTC, ends at Mon 2024-03-18 20:54:03 UTC. --
	Mar 18 20:49:47 running-upgrade-647000 kubelet[12345]: E0318 20:49:47.690805   12345 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-647000\" already exists" pod="kube-system/etcd-running-upgrade-647000"
	Mar 18 20:49:47 running-upgrade-647000 kubelet[12345]: E0318 20:49:47.889725   12345 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-647000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-647000"
	Mar 18 20:49:48 running-upgrade-647000 kubelet[12345]: I0318 20:49:48.086657   12345 request.go:601] Waited for 1.127737633s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Mar 18 20:49:48 running-upgrade-647000 kubelet[12345]: E0318 20:49:48.091350   12345 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-647000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-647000"
	Mar 18 20:49:59 running-upgrade-647000 kubelet[12345]: I0318 20:49:59.145720   12345 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 18 20:49:59 running-upgrade-647000 kubelet[12345]: I0318 20:49:59.146337   12345 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 18 20:49:59 running-upgrade-647000 kubelet[12345]: I0318 20:49:59.202118   12345 topology_manager.go:200] "Topology Admit Handler"
	Mar 18 20:49:59 running-upgrade-647000 kubelet[12345]: I0318 20:49:59.346352   12345 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx4dk\" (UniqueName: \"kubernetes.io/projected/2cf6ce9f-c8fd-4a8c-a220-de069e02f229-kube-api-access-jx4dk\") pod \"storage-provisioner\" (UID: \"2cf6ce9f-c8fd-4a8c-a220-de069e02f229\") " pod="kube-system/storage-provisioner"
	Mar 18 20:49:59 running-upgrade-647000 kubelet[12345]: I0318 20:49:59.346459   12345 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2cf6ce9f-c8fd-4a8c-a220-de069e02f229-tmp\") pod \"storage-provisioner\" (UID: \"2cf6ce9f-c8fd-4a8c-a220-de069e02f229\") " pod="kube-system/storage-provisioner"
	Mar 18 20:49:59 running-upgrade-647000 kubelet[12345]: E0318 20:49:59.450487   12345 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Mar 18 20:49:59 running-upgrade-647000 kubelet[12345]: E0318 20:49:59.450512   12345 projected.go:192] Error preparing data for projected volume kube-api-access-jx4dk for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Mar 18 20:49:59 running-upgrade-647000 kubelet[12345]: E0318 20:49:59.450565   12345 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/2cf6ce9f-c8fd-4a8c-a220-de069e02f229-kube-api-access-jx4dk podName:2cf6ce9f-c8fd-4a8c-a220-de069e02f229 nodeName:}" failed. No retries permitted until 2024-03-18 20:49:59.950550302 +0000 UTC m=+14.111273277 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jx4dk" (UniqueName: "kubernetes.io/projected/2cf6ce9f-c8fd-4a8c-a220-de069e02f229-kube-api-access-jx4dk") pod "storage-provisioner" (UID: "2cf6ce9f-c8fd-4a8c-a220-de069e02f229") : configmap "kube-root-ca.crt" not found
	Mar 18 20:49:59 running-upgrade-647000 kubelet[12345]: I0318 20:49:59.754121   12345 topology_manager.go:200] "Topology Admit Handler"
	Mar 18 20:49:59 running-upgrade-647000 kubelet[12345]: I0318 20:49:59.949697   12345 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ea2919c-7c78-4a69-bbd9-4ff30ff66fa7-lib-modules\") pod \"kube-proxy-xjn78\" (UID: \"4ea2919c-7c78-4a69-bbd9-4ff30ff66fa7\") " pod="kube-system/kube-proxy-xjn78"
	Mar 18 20:49:59 running-upgrade-647000 kubelet[12345]: I0318 20:49:59.949747   12345 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4ea2919c-7c78-4a69-bbd9-4ff30ff66fa7-kube-proxy\") pod \"kube-proxy-xjn78\" (UID: \"4ea2919c-7c78-4a69-bbd9-4ff30ff66fa7\") " pod="kube-system/kube-proxy-xjn78"
	Mar 18 20:49:59 running-upgrade-647000 kubelet[12345]: I0318 20:49:59.949760   12345 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frn2b\" (UniqueName: \"kubernetes.io/projected/4ea2919c-7c78-4a69-bbd9-4ff30ff66fa7-kube-api-access-frn2b\") pod \"kube-proxy-xjn78\" (UID: \"4ea2919c-7c78-4a69-bbd9-4ff30ff66fa7\") " pod="kube-system/kube-proxy-xjn78"
	Mar 18 20:49:59 running-upgrade-647000 kubelet[12345]: I0318 20:49:59.949772   12345 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ea2919c-7c78-4a69-bbd9-4ff30ff66fa7-xtables-lock\") pod \"kube-proxy-xjn78\" (UID: \"4ea2919c-7c78-4a69-bbd9-4ff30ff66fa7\") " pod="kube-system/kube-proxy-xjn78"
	Mar 18 20:50:00 running-upgrade-647000 kubelet[12345]: I0318 20:50:00.101922   12345 topology_manager.go:200] "Topology Admit Handler"
	Mar 18 20:50:00 running-upgrade-647000 kubelet[12345]: I0318 20:50:00.108720   12345 topology_manager.go:200] "Topology Admit Handler"
	Mar 18 20:50:00 running-upgrade-647000 kubelet[12345]: I0318 20:50:00.251635   12345 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/675cfe57-a593-4fdf-b5a1-abde3cf5e55f-config-volume\") pod \"coredns-6d4b75cb6d-tptwh\" (UID: \"675cfe57-a593-4fdf-b5a1-abde3cf5e55f\") " pod="kube-system/coredns-6d4b75cb6d-tptwh"
	Mar 18 20:50:00 running-upgrade-647000 kubelet[12345]: I0318 20:50:00.251666   12345 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-782q7\" (UniqueName: \"kubernetes.io/projected/675cfe57-a593-4fdf-b5a1-abde3cf5e55f-kube-api-access-782q7\") pod \"coredns-6d4b75cb6d-tptwh\" (UID: \"675cfe57-a593-4fdf-b5a1-abde3cf5e55f\") " pod="kube-system/coredns-6d4b75cb6d-tptwh"
	Mar 18 20:50:00 running-upgrade-647000 kubelet[12345]: I0318 20:50:00.251678   12345 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9fdcb477-549c-4ef4-b561-d1cd4530023c-config-volume\") pod \"coredns-6d4b75cb6d-89p4v\" (UID: \"9fdcb477-549c-4ef4-b561-d1cd4530023c\") " pod="kube-system/coredns-6d4b75cb6d-89p4v"
	Mar 18 20:50:00 running-upgrade-647000 kubelet[12345]: I0318 20:50:00.251689   12345 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5cb6\" (UniqueName: \"kubernetes.io/projected/9fdcb477-549c-4ef4-b561-d1cd4530023c-kube-api-access-f5cb6\") pod \"coredns-6d4b75cb6d-89p4v\" (UID: \"9fdcb477-549c-4ef4-b561-d1cd4530023c\") " pod="kube-system/coredns-6d4b75cb6d-89p4v"
	Mar 18 20:53:38 running-upgrade-647000 kubelet[12345]: I0318 20:53:38.207766   12345 scope.go:110] "RemoveContainer" containerID="61927732b548e32d838c07b89aaa2bf60422a342720880883403a7fc60de7a2a"
	Mar 18 20:53:39 running-upgrade-647000 kubelet[12345]: I0318 20:53:39.215616   12345 scope.go:110] "RemoveContainer" containerID="16c60d7d510f12ce4d0c932af50e61b92c5226e65876301e37561f0f4ba42e5c"
	
	
	==> storage-provisioner [7bc778b0d817] <==
	I0318 20:50:00.324784       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 20:50:00.328570       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 20:50:00.328600       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 20:50:00.331843       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 20:50:00.331983       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-647000_0d840ea9-f532-4859-99b0-c2809423d9f0!
	I0318 20:50:00.332320       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b10fc56f-169b-42a5-9ede-34baeda5c055", APIVersion:"v1", ResourceVersion:"372", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-647000_0d840ea9-f532-4859-99b0-c2809423d9f0 became leader
	I0318 20:50:00.433004       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-647000_0d840ea9-f532-4859-99b0-c2809423d9f0!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-647000 -n running-upgrade-647000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-647000 -n running-upgrade-647000: exit status 2 (15.647543375s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-647000" apiserver is not running, skipping kubectl commands (state="Stopped")
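Since the apiserver reports "Stopped" here, the control-plane containers could still have been inspected over SSH before the profile is cleaned up below. A minimal diagnostic sketch, assuming the Docker runtime shown in this dump and a profile that has not yet been deleted (the container ID is the one from the kube-apiserver section above; the exact commands are illustrative, not part of this test run):

    # list the apiserver container inside the VM
    out/minikube-darwin-arm64 ssh -p running-upgrade-647000 -- sudo docker ps -a --filter name=kube-apiserver
    # tail its logs by container ID (ffb4a5516c2c above)
    out/minikube-darwin-arm64 ssh -p running-upgrade-647000 -- sudo docker logs --tail 20 ffb4a5516c2c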
helpers_test.go:175: Cleaning up "running-upgrade-647000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-647000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-647000: (2.220415375s)
--- FAIL: TestRunningBinaryUpgrade (639.80s)

TestKubernetesUpgrade (18.66s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-635000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-635000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.834077167s)

-- stdout --
	* [kubernetes-upgrade-635000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-635000" primary control-plane node in "kubernetes-upgrade-635000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-635000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:46:41.343235    9663 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:46:41.343370    9663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:46:41.343374    9663 out.go:304] Setting ErrFile to fd 2...
	I0318 13:46:41.343377    9663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:46:41.343512    9663 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:46:41.344673    9663 out.go:298] Setting JSON to false
	I0318 13:46:41.362098    9663 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6373,"bootTime":1710788428,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:46:41.362161    9663 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:46:41.367501    9663 out.go:177] * [kubernetes-upgrade-635000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:46:41.380363    9663 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:46:41.375425    9663 notify.go:220] Checking for updates...
	I0318 13:46:41.388403    9663 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:46:41.396350    9663 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:46:41.403375    9663 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:46:41.411374    9663 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:46:41.419384    9663 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:46:41.422686    9663 config.go:182] Loaded profile config "multinode-685000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:46:41.422759    9663 config.go:182] Loaded profile config "running-upgrade-647000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 13:46:41.422808    9663 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:46:41.426411    9663 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 13:46:41.432352    9663 start.go:297] selected driver: qemu2
	I0318 13:46:41.432357    9663 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:46:41.432363    9663 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:46:41.434678    9663 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:46:41.438418    9663 out.go:177] * Automatically selected the socket_vmnet network
	I0318 13:46:41.442419    9663 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 13:46:41.442450    9663 cni.go:84] Creating CNI manager for ""
	I0318 13:46:41.442458    9663 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0318 13:46:41.442486    9663 start.go:340] cluster config:
	{Name:kubernetes-upgrade-635000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-635000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:46:41.446952    9663 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:46:41.454366    9663 out.go:177] * Starting "kubernetes-upgrade-635000" primary control-plane node in "kubernetes-upgrade-635000" cluster
	I0318 13:46:41.458382    9663 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 13:46:41.458402    9663 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 13:46:41.458409    9663 cache.go:56] Caching tarball of preloaded images
	I0318 13:46:41.458459    9663 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:46:41.458464    9663 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0318 13:46:41.458540    9663 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/kubernetes-upgrade-635000/config.json ...
	I0318 13:46:41.458553    9663 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/kubernetes-upgrade-635000/config.json: {Name:mke605104f0c3fd3bd37ca00e4646f07d5024e55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:46:41.458819    9663 start.go:360] acquireMachinesLock for kubernetes-upgrade-635000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:46:41.458857    9663 start.go:364] duration metric: took 30.292µs to acquireMachinesLock for "kubernetes-upgrade-635000"
	I0318 13:46:41.458871    9663 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-635000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-635000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:46:41.458896    9663 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:46:41.467387    9663 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:46:41.492024    9663 start.go:159] libmachine.API.Create for "kubernetes-upgrade-635000" (driver="qemu2")
	I0318 13:46:41.492072    9663 client.go:168] LocalClient.Create starting
	I0318 13:46:41.492155    9663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:46:41.492193    9663 main.go:141] libmachine: Decoding PEM data...
	I0318 13:46:41.492204    9663 main.go:141] libmachine: Parsing certificate...
	I0318 13:46:41.492253    9663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:46:41.492274    9663 main.go:141] libmachine: Decoding PEM data...
	I0318 13:46:41.492280    9663 main.go:141] libmachine: Parsing certificate...
	I0318 13:46:41.492634    9663 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:46:41.680509    9663 main.go:141] libmachine: Creating SSH key...
	I0318 13:46:41.754253    9663 main.go:141] libmachine: Creating Disk image...
	I0318 13:46:41.754261    9663 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:46:41.754470    9663 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/disk.qcow2
	I0318 13:46:41.769146    9663 main.go:141] libmachine: STDOUT: 
	I0318 13:46:41.769172    9663 main.go:141] libmachine: STDERR: 
	I0318 13:46:41.769248    9663 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/disk.qcow2 +20000M
	I0318 13:46:41.780195    9663 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:46:41.780214    9663 main.go:141] libmachine: STDERR: 
	I0318 13:46:41.780233    9663 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/disk.qcow2
	I0318 13:46:41.780239    9663 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:46:41.780264    9663 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:9a:b8:c0:78:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/disk.qcow2
	I0318 13:46:41.782029    9663 main.go:141] libmachine: STDOUT: 
	I0318 13:46:41.782046    9663 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:46:41.782065    9663 client.go:171] duration metric: took 289.983959ms to LocalClient.Create
	I0318 13:46:43.783191    9663 start.go:128] duration metric: took 2.324292208s to createHost
	I0318 13:46:43.783236    9663 start.go:83] releasing machines lock for "kubernetes-upgrade-635000", held for 2.324384958s
	W0318 13:46:43.783269    9663 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:46:43.796009    9663 out.go:177] * Deleting "kubernetes-upgrade-635000" in qemu2 ...
	W0318 13:46:43.812952    9663 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:46:43.812962    9663 start.go:728] Will try again in 5 seconds ...
	I0318 13:46:48.815073    9663 start.go:360] acquireMachinesLock for kubernetes-upgrade-635000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:46:48.815331    9663 start.go:364] duration metric: took 187.209µs to acquireMachinesLock for "kubernetes-upgrade-635000"
	I0318 13:46:48.815382    9663 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-635000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-635000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:46:48.815453    9663 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:46:48.824808    9663 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:46:48.846563    9663 start.go:159] libmachine.API.Create for "kubernetes-upgrade-635000" (driver="qemu2")
	I0318 13:46:48.846607    9663 client.go:168] LocalClient.Create starting
	I0318 13:46:48.846680    9663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:46:48.846720    9663 main.go:141] libmachine: Decoding PEM data...
	I0318 13:46:48.846736    9663 main.go:141] libmachine: Parsing certificate...
	I0318 13:46:48.846785    9663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:46:48.846809    9663 main.go:141] libmachine: Decoding PEM data...
	I0318 13:46:48.846816    9663 main.go:141] libmachine: Parsing certificate...
	I0318 13:46:48.847128    9663 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:46:48.986894    9663 main.go:141] libmachine: Creating SSH key...
	I0318 13:46:49.072207    9663 main.go:141] libmachine: Creating Disk image...
	I0318 13:46:49.072214    9663 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:46:49.072421    9663 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/disk.qcow2
	I0318 13:46:49.084997    9663 main.go:141] libmachine: STDOUT: 
	I0318 13:46:49.085019    9663 main.go:141] libmachine: STDERR: 
	I0318 13:46:49.085094    9663 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/disk.qcow2 +20000M
	I0318 13:46:49.095898    9663 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:46:49.095917    9663 main.go:141] libmachine: STDERR: 
	I0318 13:46:49.095930    9663 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/disk.qcow2
	I0318 13:46:49.095935    9663 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:46:49.095974    9663 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:ef:fd:5f:d7:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/disk.qcow2
	I0318 13:46:49.097861    9663 main.go:141] libmachine: STDOUT: 
	I0318 13:46:49.097879    9663 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:46:49.097891    9663 client.go:171] duration metric: took 251.281333ms to LocalClient.Create
	I0318 13:46:51.100101    9663 start.go:128] duration metric: took 2.284623375s to createHost
	I0318 13:46:51.100268    9663 start.go:83] releasing machines lock for "kubernetes-upgrade-635000", held for 2.284934041s
	W0318 13:46:51.100640    9663 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-635000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-635000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:46:51.110317    9663 out.go:177] 
	W0318 13:46:51.117578    9663 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:46:51.117637    9663 out.go:239] * 
	* 
	W0318 13:46:51.119783    9663 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:46:51.133278    9663 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-635000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-635000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-635000: (3.428364125s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-635000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-635000 status --format={{.Host}}: exit status 7 (58.216333ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-635000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-635000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.175125458s)

-- stdout --
	* [kubernetes-upgrade-635000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-635000" primary control-plane node in "kubernetes-upgrade-635000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-635000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-635000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:46:54.666844    9701 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:46:54.666969    9701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:46:54.666972    9701 out.go:304] Setting ErrFile to fd 2...
	I0318 13:46:54.666975    9701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:46:54.667099    9701 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:46:54.668121    9701 out.go:298] Setting JSON to false
	I0318 13:46:54.685857    9701 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6386,"bootTime":1710788428,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:46:54.685923    9701 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:46:54.690782    9701 out.go:177] * [kubernetes-upgrade-635000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:46:54.697766    9701 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:46:54.701720    9701 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:46:54.697881    9701 notify.go:220] Checking for updates...
	I0318 13:46:54.708651    9701 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:46:54.712710    9701 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:46:54.715757    9701 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:46:54.718720    9701 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:46:54.722046    9701 config.go:182] Loaded profile config "kubernetes-upgrade-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0318 13:46:54.722305    9701 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:46:54.725736    9701 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 13:46:54.732726    9701 start.go:297] selected driver: qemu2
	I0318 13:46:54.732731    9701 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-635000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-635000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:46:54.732779    9701 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:46:54.735442    9701 cni.go:84] Creating CNI manager for ""
	I0318 13:46:54.735459    9701 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:46:54.735476    9701 start.go:340] cluster config:
	{Name:kubernetes-upgrade-635000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-635000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:46:54.739561    9701 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:46:54.747714    9701 out.go:177] * Starting "kubernetes-upgrade-635000" primary control-plane node in "kubernetes-upgrade-635000" cluster
	I0318 13:46:54.751736    9701 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 13:46:54.751758    9701 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0318 13:46:54.751767    9701 cache.go:56] Caching tarball of preloaded images
	I0318 13:46:54.751829    9701 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:46:54.751835    9701 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0318 13:46:54.751897    9701 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/kubernetes-upgrade-635000/config.json ...
	I0318 13:46:54.752235    9701 start.go:360] acquireMachinesLock for kubernetes-upgrade-635000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:46:54.752260    9701 start.go:364] duration metric: took 19.125µs to acquireMachinesLock for "kubernetes-upgrade-635000"
	I0318 13:46:54.752269    9701 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:46:54.752272    9701 fix.go:54] fixHost starting: 
	I0318 13:46:54.752377    9701 fix.go:112] recreateIfNeeded on kubernetes-upgrade-635000: state=Stopped err=<nil>
	W0318 13:46:54.752388    9701 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:46:54.760693    9701 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-635000" ...
	I0318 13:46:54.764597    9701 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:ef:fd:5f:d7:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/disk.qcow2
	I0318 13:46:54.766340    9701 main.go:141] libmachine: STDOUT: 
	I0318 13:46:54.766357    9701 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:46:54.766383    9701 fix.go:56] duration metric: took 14.109583ms for fixHost
	I0318 13:46:54.766386    9701 start.go:83] releasing machines lock for "kubernetes-upgrade-635000", held for 14.122541ms
	W0318 13:46:54.766393    9701 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:46:54.766413    9701 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:46:54.766416    9701 start.go:728] Will try again in 5 seconds ...
	I0318 13:46:59.768470    9701 start.go:360] acquireMachinesLock for kubernetes-upgrade-635000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:46:59.768548    9701 start.go:364] duration metric: took 62.625µs to acquireMachinesLock for "kubernetes-upgrade-635000"
	I0318 13:46:59.768577    9701 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:46:59.768580    9701 fix.go:54] fixHost starting: 
	I0318 13:46:59.768722    9701 fix.go:112] recreateIfNeeded on kubernetes-upgrade-635000: state=Stopped err=<nil>
	W0318 13:46:59.768726    9701 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:46:59.775833    9701 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-635000" ...
	I0318 13:46:59.778817    9701 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:ef:fd:5f:d7:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubernetes-upgrade-635000/disk.qcow2
	I0318 13:46:59.781072    9701 main.go:141] libmachine: STDOUT: 
	I0318 13:46:59.781090    9701 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:46:59.781111    9701 fix.go:56] duration metric: took 12.530875ms for fixHost
	I0318 13:46:59.781116    9701 start.go:83] releasing machines lock for "kubernetes-upgrade-635000", held for 12.561917ms
	W0318 13:46:59.781161    9701 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-635000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-635000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:46:59.788847    9701 out.go:177] 
	W0318 13:46:59.791840    9701 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:46:59.791845    9701 out.go:239] * 
	* 
	W0318 13:46:59.792273    9701 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:46:59.803791    9701 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-635000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-635000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-635000 version --output=json: exit status 1 (28.614916ms)

** stderr ** 
	error: context "kubernetes-upgrade-635000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-03-18 13:46:59.840021 -0700 PDT m=+1101.458718418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-635000 -n kubernetes-upgrade-635000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-635000 -n kubernetes-upgrade-635000: exit status 7 (31.882958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-635000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-635000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-635000
--- FAIL: TestKubernetesUpgrade (18.66s)
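Note: every qemu2 start attempt in this test dies at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket, so qemu-system-aarch64 never receives a network fd and minikube exits with status 80. The sketch below is illustrative Go, not minikube source; it only performs the same reachability check that the logs show failing. The socket path is taken from the log above; everything else is a hypothetical diagnostic.

	// probe_socket_vmnet.go - a minimal sketch, assuming only that a daemon
	// should be listening on /var/run/socket_vmnet (the path from the logs).
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet"
		// Dial the unix socket the driver needs; if no daemon is serving it,
		// this fails exactly like the repeated log line
		// `Failed to connect to "/var/run/socket_vmnet": Connection refused`.
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "cannot connect to %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("%s is accepting connections\n", sock)
	}

A refused connection here would point at the host-side service (socket_vmnet not running, or not reachable by the jenkins user on this agent) rather than at the minikube binary under test.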

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.48s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18421
- KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2615391365/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.48s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.44s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18421
- KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1967815230/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.44s)
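Note: both TestHyperkitDriverSkipUpgrade subtests above fail identically with DRV_UNSUPPORTED_OS because the hyperkit driver exists only for Intel macs (darwin/amd64), and this agent is darwin/arm64. A guard of the following shape, shown here purely as a hypothetical sketch and not as minikube's actual test helper, is what such tests need in order to SKIP rather than FAIL on Apple silicon:

	// Hypothetical helper - illustrative only, not a minikube function.
	package hyperkit_test

	import (
		"runtime"
		"testing"
	)

	// skipUnlessHyperkitSupported skips the calling test on any platform
	// where the hyperkit driver cannot run (it is built only for darwin/amd64).
	func skipUnlessHyperkitSupported(t *testing.T) {
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			t.Skipf("hyperkit is not supported on %s/%s", runtime.GOOS, runtime.GOARCH)
		}
	}

With a guard like this, the two subtests would report SKIP instead of exit status 56 on the M1 agent.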

TestStoppedBinaryUpgrade/Upgrade (579.45s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.332148805 start -p stopped-upgrade-813000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.332148805 start -p stopped-upgrade-813000 --memory=2200 --vm-driver=qemu2 : (45.546215s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.332148805 -p stopped-upgrade-813000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.332148805 -p stopped-upgrade-813000 stop: (12.1186515s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-813000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-813000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.691348375s)

-- stdout --
	* [stopped-upgrade-813000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-813000" primary control-plane node in "stopped-upgrade-813000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-813000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0318 13:48:03.278461    9750 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:48:03.278592    9750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:48:03.278596    9750 out.go:304] Setting ErrFile to fd 2...
	I0318 13:48:03.278599    9750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:48:03.278742    9750 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:48:03.279966    9750 out.go:298] Setting JSON to false
	I0318 13:48:03.299398    9750 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6455,"bootTime":1710788428,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:48:03.299482    9750 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:48:03.304310    9750 out.go:177] * [stopped-upgrade-813000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:48:03.312256    9750 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:48:03.316264    9750 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:48:03.312328    9750 notify.go:220] Checking for updates...
	I0318 13:48:03.322227    9750 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:48:03.326288    9750 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:48:03.329261    9750 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:48:03.337301    9750 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:48:03.340594    9750 config.go:182] Loaded profile config "stopped-upgrade-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 13:48:03.344162    9750 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 13:48:03.347297    9750 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:48:03.351248    9750 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 13:48:03.358241    9750 start.go:297] selected driver: qemu2
	I0318 13:48:03.358247    9750 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51361 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 13:48:03.358295    9750 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:48:03.361081    9750 cni.go:84] Creating CNI manager for ""
	I0318 13:48:03.361101    9750 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:48:03.361130    9750 start.go:340] cluster config:
	{Name:stopped-upgrade-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51361 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 13:48:03.361192    9750 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:48:03.369202    9750 out.go:177] * Starting "stopped-upgrade-813000" primary control-plane node in "stopped-upgrade-813000" cluster
	I0318 13:48:03.373285    9750 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 13:48:03.373319    9750 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0318 13:48:03.373329    9750 cache.go:56] Caching tarball of preloaded images
	I0318 13:48:03.373408    9750 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:48:03.373415    9750 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0318 13:48:03.373472    9750 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/config.json ...
	I0318 13:48:03.373807    9750 start.go:360] acquireMachinesLock for stopped-upgrade-813000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:48:03.373842    9750 start.go:364] duration metric: took 26.75µs to acquireMachinesLock for "stopped-upgrade-813000"
	I0318 13:48:03.373853    9750 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:48:03.373857    9750 fix.go:54] fixHost starting: 
	I0318 13:48:03.373963    9750 fix.go:112] recreateIfNeeded on stopped-upgrade-813000: state=Stopped err=<nil>
	W0318 13:48:03.373971    9750 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:48:03.377265    9750 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-813000" ...
	I0318 13:48:03.385342    9750 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/stopped-upgrade-813000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/stopped-upgrade-813000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/stopped-upgrade-813000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51326-:22,hostfwd=tcp::51327-:2376,hostname=stopped-upgrade-813000 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/stopped-upgrade-813000/disk.qcow2
	I0318 13:48:03.434980    9750 main.go:141] libmachine: STDOUT: 
	I0318 13:48:03.435023    9750 main.go:141] libmachine: STDERR: 
	I0318 13:48:03.435029    9750 main.go:141] libmachine: Waiting for VM to start (ssh -p 51326 docker@127.0.0.1)...
	I0318 13:48:22.397216    9750 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/config.json ...
	I0318 13:48:22.397509    9750 machine.go:94] provisionDockerMachine start ...
	I0318 13:48:22.397562    9750 main.go:141] libmachine: Using SSH client type: native
	I0318 13:48:22.397716    9750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b95bf0] 0x104b98450 <nil>  [] 0s} localhost 51326 <nil> <nil>}
	I0318 13:48:22.397723    9750 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:48:22.459680    9750 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:48:22.459698    9750 buildroot.go:166] provisioning hostname "stopped-upgrade-813000"
	I0318 13:48:22.459751    9750 main.go:141] libmachine: Using SSH client type: native
	I0318 13:48:22.459871    9750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b95bf0] 0x104b98450 <nil>  [] 0s} localhost 51326 <nil> <nil>}
	I0318 13:48:22.459876    9750 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-813000 && echo "stopped-upgrade-813000" | sudo tee /etc/hostname
	I0318 13:48:22.523290    9750 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-813000
	
	I0318 13:48:22.523342    9750 main.go:141] libmachine: Using SSH client type: native
	I0318 13:48:22.523462    9750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b95bf0] 0x104b98450 <nil>  [] 0s} localhost 51326 <nil> <nil>}
	I0318 13:48:22.523471    9750 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-813000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-813000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-813000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:48:22.586315    9750 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:48:22.586328    9750 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18421-6777/.minikube CaCertPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18421-6777/.minikube}
	I0318 13:48:22.586336    9750 buildroot.go:174] setting up certificates
	I0318 13:48:22.586341    9750 provision.go:84] configureAuth start
	I0318 13:48:22.586349    9750 provision.go:143] copyHostCerts
	I0318 13:48:22.586424    9750 exec_runner.go:144] found /Users/jenkins/minikube-integration/18421-6777/.minikube/ca.pem, removing ...
	I0318 13:48:22.586429    9750 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18421-6777/.minikube/ca.pem
	I0318 13:48:22.586535    9750 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18421-6777/.minikube/ca.pem (1078 bytes)
	I0318 13:48:22.586749    9750 exec_runner.go:144] found /Users/jenkins/minikube-integration/18421-6777/.minikube/cert.pem, removing ...
	I0318 13:48:22.586753    9750 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18421-6777/.minikube/cert.pem
	I0318 13:48:22.586801    9750 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18421-6777/.minikube/cert.pem (1123 bytes)
	I0318 13:48:22.586914    9750 exec_runner.go:144] found /Users/jenkins/minikube-integration/18421-6777/.minikube/key.pem, removing ...
	I0318 13:48:22.586918    9750 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18421-6777/.minikube/key.pem
	I0318 13:48:22.586962    9750 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18421-6777/.minikube/key.pem (1679 bytes)
	I0318 13:48:22.587054    9750 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-813000 san=[127.0.0.1 localhost minikube stopped-upgrade-813000]
	I0318 13:48:22.677450    9750 provision.go:177] copyRemoteCerts
	I0318 13:48:22.677482    9750 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:48:22.677490    9750 sshutil.go:53] new ssh client: &{IP:localhost Port:51326 SSHKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/stopped-upgrade-813000/id_rsa Username:docker}
	I0318 13:48:22.709545    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:48:22.716288    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0318 13:48:22.722874    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 13:48:22.730001    9750 provision.go:87] duration metric: took 143.651459ms to configureAuth
	I0318 13:48:22.730010    9750 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:48:22.730117    9750 config.go:182] Loaded profile config "stopped-upgrade-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 13:48:22.730152    9750 main.go:141] libmachine: Using SSH client type: native
	I0318 13:48:22.730245    9750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b95bf0] 0x104b98450 <nil>  [] 0s} localhost 51326 <nil> <nil>}
	I0318 13:48:22.730249    9750 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 13:48:22.788162    9750 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 13:48:22.788170    9750 buildroot.go:70] root file system type: tmpfs
	I0318 13:48:22.788221    9750 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 13:48:22.788260    9750 main.go:141] libmachine: Using SSH client type: native
	I0318 13:48:22.788358    9750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b95bf0] 0x104b98450 <nil>  [] 0s} localhost 51326 <nil> <nil>}
	I0318 13:48:22.788389    9750 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 13:48:22.851062    9750 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 13:48:22.851114    9750 main.go:141] libmachine: Using SSH client type: native
	I0318 13:48:22.851217    9750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b95bf0] 0x104b98450 <nil>  [] 0s} localhost 51326 <nil> <nil>}
	I0318 13:48:22.851228    9750 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 13:48:23.222358    9750 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0318 13:48:23.222373    9750 machine.go:97] duration metric: took 824.8625ms to provisionDockerMachine
	I0318 13:48:23.222381    9750 start.go:293] postStartSetup for "stopped-upgrade-813000" (driver="qemu2")
	I0318 13:48:23.222388    9750 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:48:23.222456    9750 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:48:23.222468    9750 sshutil.go:53] new ssh client: &{IP:localhost Port:51326 SSHKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/stopped-upgrade-813000/id_rsa Username:docker}
	I0318 13:48:23.254873    9750 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:48:23.256307    9750 info.go:137] Remote host: Buildroot 2021.02.12
	I0318 13:48:23.256319    9750 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18421-6777/.minikube/addons for local assets ...
	I0318 13:48:23.256395    9750 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18421-6777/.minikube/files for local assets ...
	I0318 13:48:23.256510    9750 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18421-6777/.minikube/files/etc/ssl/certs/72362.pem -> 72362.pem in /etc/ssl/certs
	I0318 13:48:23.256633    9750 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:48:23.259194    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/files/etc/ssl/certs/72362.pem --> /etc/ssl/certs/72362.pem (1708 bytes)
	I0318 13:48:23.266139    9750 start.go:296] duration metric: took 43.751292ms for postStartSetup
	I0318 13:48:23.266154    9750 fix.go:56] duration metric: took 19.892398917s for fixHost
	I0318 13:48:23.266210    9750 main.go:141] libmachine: Using SSH client type: native
	I0318 13:48:23.266311    9750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b95bf0] 0x104b98450 <nil>  [] 0s} localhost 51326 <nil> <nil>}
	I0318 13:48:23.266316    9750 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 13:48:23.323703    9750 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710794903.764309837
	
	I0318 13:48:23.323712    9750 fix.go:216] guest clock: 1710794903.764309837
	I0318 13:48:23.323716    9750 fix.go:229] Guest: 2024-03-18 13:48:23.764309837 -0700 PDT Remote: 2024-03-18 13:48:23.266168 -0700 PDT m=+20.024459085 (delta=498.141837ms)
	I0318 13:48:23.323735    9750 fix.go:200] guest clock delta is within tolerance: 498.141837ms
	I0318 13:48:23.323741    9750 start.go:83] releasing machines lock for "stopped-upgrade-813000", held for 19.949992708s
	I0318 13:48:23.323810    9750 ssh_runner.go:195] Run: cat /version.json
	I0318 13:48:23.323819    9750 sshutil.go:53] new ssh client: &{IP:localhost Port:51326 SSHKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/stopped-upgrade-813000/id_rsa Username:docker}
	I0318 13:48:23.323810    9750 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:48:23.323868    9750 sshutil.go:53] new ssh client: &{IP:localhost Port:51326 SSHKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/stopped-upgrade-813000/id_rsa Username:docker}
	W0318 13:48:23.324383    9750 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51326: connect: connection refused
	I0318 13:48:23.324408    9750 retry.go:31] will retry after 342.017178ms: dial tcp [::1]:51326: connect: connection refused
	W0318 13:48:23.351520    9750 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0318 13:48:23.351568    9750 ssh_runner.go:195] Run: systemctl --version
	I0318 13:48:23.353339    9750 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:48:23.354888    9750 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:48:23.354916    9750 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0318 13:48:23.358021    9750 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0318 13:48:23.362633    9750 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:48:23.362642    9750 start.go:494] detecting cgroup driver to use...
	I0318 13:48:23.362721    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:48:23.369689    9750 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0318 13:48:23.373124    9750 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 13:48:23.375821    9750 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 13:48:23.375853    9750 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 13:48:23.378758    9750 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 13:48:23.382262    9750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 13:48:23.385708    9750 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 13:48:23.388657    9750 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:48:23.391424    9750 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 13:48:23.394647    9750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0318 13:48:23.398069    9750 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0318 13:48:23.401502    9750 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:48:23.404164    9750 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:48:23.406911    9750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:48:23.468783    9750 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 13:48:23.474384    9750 start.go:494] detecting cgroup driver to use...
	I0318 13:48:23.474460    9750 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 13:48:23.483614    9750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:48:23.488250    9750 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:48:23.497324    9750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:48:23.501945    9750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 13:48:23.506398    9750 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 13:48:23.561953    9750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 13:48:23.567235    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:48:23.572619    9750 ssh_runner.go:195] Run: which cri-dockerd
	I0318 13:48:23.573857    9750 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 13:48:23.576425    9750 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 13:48:23.581214    9750 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 13:48:23.658767    9750 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 13:48:23.743875    9750 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 13:48:23.744373    9750 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 13:48:23.750122    9750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:48:23.826910    9750 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 13:48:24.968784    9750 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.141859958s)
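"configuring docker to use cgroupfs" above means writing an /etc/docker/daemon.json that selects the cgroup driver before restarting the daemon. The log only shows that a 130-byte file was copied over; the sketch below assumes a plausible shape for that payload (the exec-opts field is an assumption, not the file's confirmed contents):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical daemon.json payload selecting the cgroupfs driver;
	// the real file minikube scps over is 130 bytes per the log.
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}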
	I0318 13:48:24.968846    9750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 13:48:24.973800    9750 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0318 13:48:24.979896    9750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 13:48:24.984316    9750 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 13:48:25.066772    9750 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 13:48:25.146685    9750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:48:25.224346    9750 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 13:48:25.229819    9750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 13:48:25.234994    9750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:48:25.296231    9750 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 13:48:25.341799    9750 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 13:48:25.341885    9750 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
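"Will wait 60s for socket path" is a simple poll-until-exists loop against /var/run/cri-dockerd.sock. A self-contained sketch of that pattern (the 500ms poll interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or the timeout elapses.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}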
	I0318 13:48:25.344821    9750 start.go:562] Will wait 60s for crictl version
	I0318 13:48:25.344874    9750 ssh_runner.go:195] Run: which crictl
	I0318 13:48:25.346112    9750 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:48:25.361262    9750 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0318 13:48:25.361331    9750 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 13:48:25.379956    9750 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 13:48:25.402207    9750 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0318 13:48:25.402321    9750 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0318 13:48:25.403619    9750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
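The bash pipeline above makes the /etc/hosts update idempotent: it filters out any stale host.minikube.internal line, appends the fresh mapping, and copies the result back into place. The same filter-and-append step in Go (the helper name is illustrative):

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry drops any stale line ending in "\t<host>" and
// appends "ip\thost", like the grep -v / echo pipeline in the log.
func ensureHostsEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	return strings.Join(kept, "\n") + "\n" + ip + "\t" + host + "\n"
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1\tlocalhost", "10.0.2.2", "host.minikube.internal"))
}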
	I0318 13:48:25.407216    9750 kubeadm.go:877] updating cluster {Name:stopped-upgrade-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51361 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0318 13:48:25.407267    9750 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 13:48:25.407306    9750 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 13:48:25.417661    9750 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 13:48:25.417669    9750 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 13:48:25.417711    9750 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 13:48:25.420947    9750 ssh_runner.go:195] Run: which lz4
	I0318 13:48:25.422157    9750 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 13:48:25.423286    9750 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:48:25.423295    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0318 13:48:26.122521    9750 docker.go:649] duration metric: took 700.398875ms to copy over tarball
	I0318 13:48:26.122583    9750 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:48:27.312202    9750 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.18960475s)
	I0318 13:48:27.312217    9750 ssh_runner.go:146] rm: /preloaded.tar.lz4
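Because the guest image store lacked the expected images, minikube copied the preloaded tarball over SSH and unpacked it into /var with tar using an lz4 decompressor, preserving security xattrs. A sketch of that extraction step, assuming tar and lz4 are on PATH:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks a preloaded image tarball the same way the
// log does: tar with an lz4 decompressor, extended attrs preserved.
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("tar: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}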
	I0318 13:48:27.329024    9750 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 13:48:27.332253    9750 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0318 13:48:27.337924    9750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:48:27.422330    9750 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 13:48:28.939162    9750 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.516821s)
	I0318 13:48:28.939265    9750 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 13:48:28.953997    9750 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 13:48:28.954007    9750 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 13:48:28.954012    9750 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 13:48:28.961543    9750 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 13:48:28.961554    9750 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0318 13:48:28.961793    9750 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 13:48:28.961905    9750 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 13:48:28.961961    9750 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:48:28.962274    9750 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 13:48:28.962523    9750 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0318 13:48:28.962722    9750 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 13:48:28.971033    9750 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0318 13:48:28.971103    9750 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:48:28.971841    9750 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 13:48:28.971881    9750 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 13:48:28.971983    9750 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 13:48:28.971963    9750 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 13:48:28.971992    9750 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 13:48:28.972028    9750 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0318 13:48:30.921805    9750 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0318 13:48:30.962632    9750 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0318 13:48:30.962684    9750 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0318 13:48:30.962777    9750 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0318 13:48:30.984481    9750 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0318 13:48:30.987224    9750 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0318 13:48:30.987346    9750 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0318 13:48:31.001203    9750 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0318 13:48:31.001233    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0318 13:48:31.001317    9750 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0318 13:48:31.001343    9750 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 13:48:31.001384    9750 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0318 13:48:31.014028    9750 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0318 13:48:31.014039    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0318 13:48:31.015809    9750 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0318 13:48:31.018434    9750 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0318 13:48:31.045854    9750 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
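Each cached image is transferred as a saved tarball and then streamed into the daemon with "sudo cat <file> | docker load". The Go equivalent of that pipeline (run as a user with docker access; sudo is elided here):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImage streams a saved image tarball into the daemon, the Go
// equivalent of "cat /var/lib/minikube/images/pause_3.7 | docker load".
func loadImage(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker load: %v: %s", err, out)
	}
	fmt.Printf("%s", out)
	return nil
}

func main() {
	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Println(err)
	}
}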
	I0318 13:48:31.045886    9750 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0318 13:48:31.045904    9750 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0318 13:48:31.045958    9750 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0318 13:48:31.047681    9750 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	W0318 13:48:31.048119    9750 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0318 13:48:31.048203    9750 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0318 13:48:31.054814    9750 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0318 13:48:31.056032    9750 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 13:48:31.059158    9750 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0318 13:48:31.059268    9750 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0318 13:48:31.072057    9750 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0318 13:48:31.072079    9750 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 13:48:31.072060    9750 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0318 13:48:31.072128    9750 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 13:48:31.072131    9750 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0318 13:48:31.072155    9750 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0318 13:48:31.077328    9750 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0318 13:48:31.077347    9750 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 13:48:31.077400    9750 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0318 13:48:31.079606    9750 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0318 13:48:31.079629    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0318 13:48:31.079658    9750 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0318 13:48:31.079671    9750 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 13:48:31.079701    9750 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 13:48:31.111637    9750 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0318 13:48:31.111638    9750 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0318 13:48:31.111696    9750 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0318 13:48:31.111751    9750 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0318 13:48:31.116583    9750 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0318 13:48:31.118432    9750 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0318 13:48:31.118461    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0318 13:48:31.185141    9750 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0318 13:48:31.185155    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0318 13:48:31.325908    9750 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0318 13:48:31.352797    9750 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0318 13:48:31.352810    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0318 13:48:31.404965    9750 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0318 13:48:31.405091    9750 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:48:31.493440    9750 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0318 13:48:31.493460    9750 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0318 13:48:31.493478    9750 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:48:31.493530    9750 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:48:31.507028    9750 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 13:48:31.507141    9750 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:48:31.508557    9750 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0318 13:48:31.508569    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0318 13:48:31.534632    9750 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:48:31.534645    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0318 13:48:31.780874    9750 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 13:48:31.780913    9750 cache_images.go:92] duration metric: took 2.826909209s to LoadCachedImages
	W0318 13:48:31.780956    9750 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0318 13:48:31.780963    9750 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0318 13:48:31.781027    9750 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-813000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:48:31.781085    9750 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 13:48:31.794106    9750 cni.go:84] Creating CNI manager for ""
	I0318 13:48:31.794119    9750 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:48:31.794124    9750 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:48:31.794132    9750 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-813000 NodeName:stopped-upgrade-813000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:48:31.794204    9750 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-813000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:48:31.794260    9750 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0318 13:48:31.797650    9750 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:48:31.797681    9750 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:48:31.800461    9750 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0318 13:48:31.805323    9750 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:48:31.810302    9750 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0318 13:48:31.815797    9750 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0318 13:48:31.817078    9750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:48:31.820636    9750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:48:31.884244    9750 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:48:31.894227    9750 certs.go:68] Setting up /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000 for IP: 10.0.2.15
	I0318 13:48:31.894237    9750 certs.go:194] generating shared ca certs ...
	I0318 13:48:31.894246    9750 certs.go:226] acquiring lock for ca certs: {Name:mkb77ca79ad1917526a647bf0189e0c89f5a836a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:48:31.894399    9750 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/ca.key
	I0318 13:48:31.895203    9750 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/proxy-client-ca.key
	I0318 13:48:31.895211    9750 certs.go:256] generating profile certs ...
	I0318 13:48:31.895407    9750 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/client.key
	I0318 13:48:31.895429    9750 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.key.b3f91078
	I0318 13:48:31.895442    9750 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.crt.b3f91078 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0318 13:48:32.086926    9750 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.crt.b3f91078 ...
	I0318 13:48:32.086945    9750 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.crt.b3f91078: {Name:mkf4eae5165cc01f8e05b702f75f9a115150bce0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:48:32.087278    9750 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.key.b3f91078 ...
	I0318 13:48:32.087283    9750 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.key.b3f91078: {Name:mkeb2db62c86a688fb8027b3cb32820cacd322df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:48:32.087401    9750 certs.go:381] copying /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.crt.b3f91078 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.crt
	I0318 13:48:32.087604    9750 certs.go:385] copying /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.key.b3f91078 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.key
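The profile cert generated above must carry IP SANs for the service VIP, loopback, and the node address ([10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15] per the log). The sketch below produces a self-signed stand-in with that SAN list; the real cert is signed by minikubeCA, and the 26280h lifetime is taken from the CertExpiration field in the cluster config:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SAN list from the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
	}
	// Self-signed for illustration; minikube signs with its CA key.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}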
	I0318 13:48:32.087995    9750 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/proxy-client.key
	I0318 13:48:32.088172    9750 certs.go:484] found cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/7236.pem (1338 bytes)
	W0318 13:48:32.088405    9750 certs.go:480] ignoring /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/7236_empty.pem, impossibly tiny 0 bytes
	I0318 13:48:32.088414    9750 certs.go:484] found cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:48:32.088439    9750 certs.go:484] found cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:48:32.088472    9750 certs.go:484] found cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:48:32.088493    9750 certs.go:484] found cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/key.pem (1679 bytes)
	I0318 13:48:32.088548    9750 certs.go:484] found cert: /Users/jenkins/minikube-integration/18421-6777/.minikube/files/etc/ssl/certs/72362.pem (1708 bytes)
	I0318 13:48:32.088921    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:48:32.096100    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:48:32.102817    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:48:32.109878    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:48:32.116340    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 13:48:32.122527    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:48:32.130047    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:48:32.137433    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 13:48:32.144363    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/files/etc/ssl/certs/72362.pem --> /usr/share/ca-certificates/72362.pem (1708 bytes)
	I0318 13:48:32.150990    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:48:32.157499    9750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/7236.pem --> /usr/share/ca-certificates/7236.pem (1338 bytes)
	I0318 13:48:32.164069    9750 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:48:32.168942    9750 ssh_runner.go:195] Run: openssl version
	I0318 13:48:32.170830    9750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72362.pem && ln -fs /usr/share/ca-certificates/72362.pem /etc/ssl/certs/72362.pem"
	I0318 13:48:32.174365    9750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72362.pem
	I0318 13:48:32.175911    9750 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:31 /usr/share/ca-certificates/72362.pem
	I0318 13:48:32.175933    9750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72362.pem
	I0318 13:48:32.177609    9750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:48:32.180578    9750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:48:32.183440    9750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:48:32.184990    9750 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:44 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:48:32.185011    9750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:48:32.186617    9750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:48:32.189912    9750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7236.pem && ln -fs /usr/share/ca-certificates/7236.pem /etc/ssl/certs/7236.pem"
	I0318 13:48:32.193123    9750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7236.pem
	I0318 13:48:32.194629    9750 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:31 /usr/share/ca-certificates/7236.pem
	I0318 13:48:32.194649    9750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7236.pem
	I0318 13:48:32.196763    9750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7236.pem /etc/ssl/certs/51391683.0"
	I0318 13:48:32.199522    9750 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:48:32.200899    9750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:48:32.202760    9750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:48:32.204865    9750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:48:32.206868    9750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:48:32.208607    9750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:48:32.210279    9750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
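The openssl invocations above ("-checkend 86400") verify that each control-plane cert is still valid for at least 24 hours before it is reused. The same check in Go, parsing the PEM and comparing NotAfter:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors "openssl x509 -checkend 86400": it reports
// whether the PEM cert at path expires inside the given window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}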
	I0318 13:48:32.212187    9750 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51361 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 13:48:32.212252    9750 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 13:48:32.222241    9750 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:48:32.225348    9750 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:48:32.225355    9750 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:48:32.225357    9750 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:48:32.225380    9750 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:48:32.228003    9750 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:48:32.228309    9750 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-813000" does not appear in /Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:48:32.228413    9750 kubeconfig.go:62] /Users/jenkins/minikube-integration/18421-6777/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-813000" cluster setting kubeconfig missing "stopped-upgrade-813000" context setting]
	I0318 13:48:32.228623    9750 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/kubeconfig: {Name:mk6a62990bf9328d54440f15380010f8199a9228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:48:32.229042    9750 kapi.go:59] client config for stopped-upgrade-813000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/client.key", CAFile:"/Users/jenkins/minikube-integration/18421-6777/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105e86a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 13:48:32.229478    9750 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:48:32.232029    9750 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-813000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0318 13:48:32.232035    9750 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:48:32.232067    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 13:48:32.243008    9750 docker.go:483] Stopping containers: [d40fae90d1aa ba3504103d36 9353fb6ad2b7 9e22a05ae9a3 67619eb167c0 a67d887e308c d6a44a7b025e b531e5fe4674]
	I0318 13:48:32.243065    9750 ssh_runner.go:195] Run: docker stop d40fae90d1aa ba3504103d36 9353fb6ad2b7 9e22a05ae9a3 67619eb167c0 a67d887e308c d6a44a7b025e b531e5fe4674
	I0318 13:48:32.256084    9750 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:48:32.261863    9750 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:48:32.265199    9750 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:48:32.265205    9750 kubeadm.go:156] found existing configuration files:
	
	I0318 13:48:32.265232    9750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/admin.conf
	I0318 13:48:32.267648    9750 kubeadm.go:162] "https://control-plane.minikube.internal:51361" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:48:32.267671    9750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:48:32.270328    9750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/kubelet.conf
	I0318 13:48:32.272944    9750 kubeadm.go:162] "https://control-plane.minikube.internal:51361" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:48:32.272963    9750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:48:32.275317    9750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/controller-manager.conf
	I0318 13:48:32.278231    9750 kubeadm.go:162] "https://control-plane.minikube.internal:51361" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:48:32.278255    9750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:48:32.281383    9750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/scheduler.conf
	I0318 13:48:32.283866    9750 kubeadm.go:162] "https://control-plane.minikube.internal:51361" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:48:32.283893    9750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:48:32.286651    9750 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:48:32.289673    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:48:32.312776    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:48:32.702117    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:48:32.834775    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:48:32.857164    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:48:32.879218    9750 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:48:32.879302    9750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:48:33.381379    9750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:48:33.881350    9750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:48:33.885317    9750 api_server.go:72] duration metric: took 1.006106166s to wait for apiserver process to appear ...
	I0318 13:48:33.885327    9750 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:48:33.885341    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:38.887465    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:38.887533    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:43.887924    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:43.887966    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:48.888359    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:48.888394    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:53.888990    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:53.889092    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:48:58.889975    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:48:58.889994    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:03.890659    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:03.890751    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:08.892282    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:08.892342    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:13.894043    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:13.894124    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:18.897211    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:18.897318    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:23.898848    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:23.898892    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:28.899226    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:28.899269    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:33.901537    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
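The healthz probes above fail with a 5-second client timeout on every attempt, after which minikube falls back to gathering logs. A sketch of that polling pattern; skipping TLS verification is a simplification (minikube trusts the cluster CA instead), and the retry pause is an assumption:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz probes the apiserver /healthz endpoint with a 5s
// per-request timeout, matching the cadence in the log.
func pollHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		resp, err := client.Get(url)
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // brief pause between probes (assumption)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := pollHealthz("https://10.0.2.15:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}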
	I0318 13:49:33.901754    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:49:33.917526    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:49:33.917590    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:49:33.930319    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:49:33.930401    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:49:33.941218    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:49:33.941286    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:49:33.954420    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:49:33.954500    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:49:33.964735    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:49:33.964801    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:49:33.975779    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:49:33.975844    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:49:33.985814    9750 logs.go:276] 0 containers: []
	W0318 13:49:33.985825    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:49:33.985893    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:49:34.000256    9750 logs.go:276] 1 containers: [657d2055fda5]
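After a failed probe, minikube falls back to collecting diagnostics. The lines above show the discovery step: one `docker ps -a` per control-plane component, filtered by the kubelet's `k8s_<component>` container-naming convention and formatted to emit bare container IDs. Two IDs for a component (as with the two kube-apiserver entries) indicate an exited container plus its restarted successor. Below is a hedged sketch of that loop; the component names and docker flags are copied from the log, while the os/exec wrapper is illustrative rather than minikube's actual runner.

// Sketch of the container-discovery step seen above: enumerate each
// control-plane component's containers by the k8s_<component> name filter.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// Two IDs per component in this log mean a restarted container.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}

Running the same `docker ps` filter by hand inside the guest can be a quick way to confirm whether a component is crash-looping: a growing list of exited IDs for kube-apiserver points at the apiserver itself rather than at networking.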
	I0318 13:49:34.000278    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:49:34.000283    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:49:34.111913    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:49:34.111927    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:49:34.126336    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:49:34.126345    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:49:34.164971    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:49:34.164985    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:49:34.169516    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:49:34.169523    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:49:34.194419    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:49:34.194427    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:49:34.209287    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:49:34.209298    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:49:34.253713    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:49:34.253725    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:49:34.268382    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:49:34.268392    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:49:34.283560    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:49:34.283569    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:49:34.301543    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:49:34.301554    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:49:34.317456    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:49:34.317465    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:49:34.329324    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:49:34.329335    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:49:34.340878    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:49:34.340889    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:49:34.353721    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:49:34.353739    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:49:34.369455    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:49:34.369465    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
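With the container IDs in hand, the runner fans out over the log sources: a bounded `docker logs --tail 400` per container, `journalctl` for the kubelet and docker/cri-docker units, `dmesg` for kernel warnings, and `kubectl describe nodes` through the version-pinned binary under /var/lib/minikube/binaries. The commands in the sketch below are verbatim from the cycle above; executing them locally through /bin/bash -c, rather than over the ssh_runner the log shows, is a simplification.

// Sketch of the log-gathering fan-out from the cycle above. The container
// IDs are the kube-apiserver IDs reported earlier in this log.
package main

import (
	"fmt"
	"os/exec"
)

func gather(name, cmd string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("%s failed: %v\n", name, err)
	}
	_ = out // the real runner keeps this output for the final report
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	gather("describe nodes",
		"sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	for _, id := range []string{"5e53b215e10c", "9e22a05ae9a3"} {
		gather("kube-apiserver ["+id+"]", fmt.Sprintf("docker logs --tail 400 %s", id))
	}
}

This gather-then-retry cycle repeats for the remainder of the section at roughly eight-second intervals, with only the ordering of the "Gathering logs" entries varying between iterations.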
	I0318 13:49:36.887084    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:41.887388    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:41.887524    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:49:41.899423    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:49:41.899498    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:49:41.910366    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:49:41.910443    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:49:41.921573    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:49:41.921647    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:49:41.933219    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:49:41.933290    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:49:41.944497    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:49:41.944590    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:49:41.956346    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:49:41.956413    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:49:41.967469    9750 logs.go:276] 0 containers: []
	W0318 13:49:41.967482    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:49:41.967540    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:49:41.978949    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:49:41.978967    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:49:41.978973    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:49:41.994576    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:49:41.994595    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:49:42.009897    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:49:42.009908    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:49:42.028106    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:49:42.028118    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:49:42.043154    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:49:42.043170    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:49:42.047708    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:49:42.047719    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:49:42.090956    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:49:42.090984    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:49:42.117367    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:49:42.117390    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:49:42.130026    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:49:42.130037    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:49:42.172834    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:49:42.172853    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:49:42.214090    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:49:42.214104    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:49:42.230403    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:49:42.230424    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:49:42.245125    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:49:42.245142    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:49:42.257127    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:49:42.257140    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:49:42.273633    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:49:42.273643    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:49:42.286473    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:49:42.286489    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:49:44.802088    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:49.804084    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:49.804358    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:49:49.835272    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:49:49.835391    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:49:49.851369    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:49:49.851458    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:49:49.864555    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:49:49.864630    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:49:49.875598    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:49:49.875668    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:49:49.885795    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:49:49.885860    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:49:49.896285    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:49:49.896369    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:49:49.906331    9750 logs.go:276] 0 containers: []
	W0318 13:49:49.906342    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:49:49.906403    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:49:49.916753    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:49:49.916772    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:49:49.916777    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:49:49.930452    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:49:49.930462    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:49:49.968521    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:49:49.968532    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:49:50.004829    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:49:50.004839    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:49:50.016445    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:49:50.016456    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:49:50.028324    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:49:50.028335    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:49:50.054094    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:49:50.054107    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:49:50.066269    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:49:50.066279    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:49:50.070996    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:49:50.071006    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:49:50.088803    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:49:50.088813    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:49:50.100467    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:49:50.100477    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:49:50.115973    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:49:50.115983    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:49:50.133205    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:49:50.133217    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:49:50.171234    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:49:50.171254    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:49:50.185639    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:49:50.185649    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:49:50.200354    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:49:50.200366    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:49:52.714196    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:49:57.716780    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:49:57.716894    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:49:57.727357    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:49:57.727430    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:49:57.738075    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:49:57.738150    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:49:57.749312    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:49:57.749379    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:49:57.777507    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:49:57.777586    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:49:57.788160    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:49:57.788234    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:49:57.798798    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:49:57.798867    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:49:57.816971    9750 logs.go:276] 0 containers: []
	W0318 13:49:57.816981    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:49:57.817037    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:49:57.829185    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:49:57.829204    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:49:57.829209    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:49:57.871737    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:49:57.871757    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:49:57.875937    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:49:57.875943    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:49:57.911110    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:49:57.911121    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:49:57.923017    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:49:57.923028    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:49:57.934975    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:49:57.934987    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:49:57.947397    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:49:57.947409    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:49:57.962141    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:49:57.962152    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:49:57.976407    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:49:57.976417    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:49:57.988789    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:49:57.988801    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:49:58.003610    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:49:58.003623    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:49:58.043729    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:49:58.043745    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:49:58.062732    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:49:58.062742    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:49:58.078882    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:49:58.078892    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:49:58.091186    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:49:58.091196    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:49:58.108671    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:49:58.108686    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:50:00.632323    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:05.634665    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:05.634928    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:50:05.662499    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:50:05.662608    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:50:05.680648    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:50:05.680729    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:50:05.693549    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:50:05.693637    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:50:05.705825    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:50:05.705895    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:50:05.716704    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:50:05.716774    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:50:05.727166    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:50:05.727237    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:50:05.742883    9750 logs.go:276] 0 containers: []
	W0318 13:50:05.742895    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:05.742958    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:50:05.755177    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:50:05.755194    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:50:05.755200    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:50:05.767163    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:50:05.767173    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:50:05.778616    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:05.778626    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:50:05.816788    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:50:05.816802    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:50:05.838101    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:50:05.838111    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:50:05.853453    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:50:05.853465    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:50:05.866243    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:50:05.866254    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:50:05.905315    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:05.905326    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:05.909454    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:50:05.909461    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:50:05.923452    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:50:05.923463    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:50:05.943090    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:50:05.943099    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:50:05.957977    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:05.957988    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:05.996316    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:50:05.996325    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:50:06.011404    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:50:06.011415    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:50:06.022452    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:50:06.022461    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:50:06.045997    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:50:06.046006    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:50:08.559193    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:13.561191    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:13.561336    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:50:13.573673    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:50:13.573733    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:50:13.584404    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:50:13.584469    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:50:13.594504    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:50:13.594558    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:50:13.609387    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:50:13.609454    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:50:13.620174    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:50:13.620242    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:50:13.630241    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:50:13.630306    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:50:13.640931    9750 logs.go:276] 0 containers: []
	W0318 13:50:13.640943    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:13.641001    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:50:13.651565    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:50:13.651581    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:50:13.651587    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:50:13.668160    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:13.668169    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:13.705632    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:13.705645    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:13.710263    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:50:13.710274    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:50:13.724387    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:50:13.724400    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:50:13.737852    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:50:13.737863    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:50:13.749836    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:50:13.749846    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:50:13.761444    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:50:13.761460    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:50:13.781613    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:50:13.781627    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:50:13.793894    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:50:13.793907    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:50:13.809896    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:50:13.809907    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:50:13.834349    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:13.834356    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:50:13.870820    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:50:13.870832    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:50:13.885271    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:50:13.885281    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:50:13.923583    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:50:13.923594    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:50:13.935233    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:50:13.935244    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:50:16.449581    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:21.451902    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:21.452068    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:50:21.466889    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:50:21.466969    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:50:21.479091    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:50:21.479159    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:50:21.491378    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:50:21.491441    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:50:21.501859    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:50:21.501928    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:50:21.512634    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:50:21.512702    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:50:21.523090    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:50:21.523159    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:50:21.532854    9750 logs.go:276] 0 containers: []
	W0318 13:50:21.532865    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:21.532920    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:50:21.543870    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:50:21.543889    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:50:21.543894    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:50:21.558202    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:50:21.558215    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:50:21.570083    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:50:21.570095    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:50:21.584334    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:21.584345    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:21.588959    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:21.588968    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:50:21.628012    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:50:21.628024    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:50:21.640167    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:50:21.640181    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:50:21.651381    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:50:21.651391    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:50:21.663243    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:50:21.663254    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:50:21.701880    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:50:21.701891    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:50:21.716142    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:50:21.716152    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:50:21.731223    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:50:21.731235    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:50:21.755390    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:50:21.755399    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:50:21.766748    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:21.766760    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:21.804586    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:50:21.804599    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:50:21.820119    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:50:21.820133    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:50:24.342803    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:29.345131    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:29.345316    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:50:29.358688    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:50:29.358775    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:50:29.369710    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:50:29.369775    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:50:29.380216    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:50:29.380285    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:50:29.391104    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:50:29.391172    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:50:29.401073    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:50:29.401145    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:50:29.411550    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:50:29.411610    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:50:29.421799    9750 logs.go:276] 0 containers: []
	W0318 13:50:29.421817    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:29.421874    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:50:29.432088    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:50:29.432107    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:50:29.432112    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:50:29.443358    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:50:29.443368    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:50:29.467601    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:29.467613    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:50:29.504820    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:50:29.504831    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:50:29.521350    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:50:29.521362    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:50:29.535755    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:50:29.535767    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:50:29.546788    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:50:29.546801    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:50:29.562298    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:50:29.562311    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:50:29.580470    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:50:29.580481    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:50:29.594955    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:29.594967    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:29.599179    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:50:29.599189    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:50:29.617412    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:50:29.617424    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:50:29.659133    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:50:29.659144    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:50:29.670775    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:50:29.670786    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:50:29.682412    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:29.682422    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:29.720620    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:50:29.720629    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:50:32.233804    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:37.236175    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:37.236430    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:50:37.272550    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:50:37.272651    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:50:37.287877    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:50:37.287961    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:50:37.300882    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:50:37.300950    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:50:37.311610    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:50:37.311682    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:50:37.322354    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:50:37.322422    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:50:37.333036    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:50:37.333104    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:50:37.351702    9750 logs.go:276] 0 containers: []
	W0318 13:50:37.351716    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:37.351773    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:50:37.362396    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:50:37.362412    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:37.362420    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:37.399111    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:37.399123    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:37.403155    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:37.403163    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:50:37.439108    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:50:37.439121    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:50:37.456905    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:50:37.456918    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:50:37.468859    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:50:37.468873    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:50:37.484177    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:50:37.484187    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:50:37.495905    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:50:37.495917    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:50:37.509357    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:50:37.509367    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:50:37.523946    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:50:37.523958    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:50:37.537848    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:50:37.537860    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:50:37.552959    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:50:37.552971    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:50:37.564562    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:50:37.564572    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:50:37.602359    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:50:37.602370    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:50:37.614080    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:50:37.614091    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:50:37.626429    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:50:37.626440    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:50:40.151324    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:45.153646    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:45.153806    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:50:45.173748    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:50:45.173841    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:50:45.187257    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:50:45.187335    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:50:45.198731    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:50:45.198804    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:50:45.209908    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:50:45.209984    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:50:45.220274    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:50:45.220342    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:50:45.232567    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:50:45.232637    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:50:45.243069    9750 logs.go:276] 0 containers: []
	W0318 13:50:45.243080    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:45.243136    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:50:45.254275    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:50:45.254293    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:50:45.254298    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:50:45.294935    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:50:45.294945    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:50:45.308948    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:50:45.308958    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:50:45.321082    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:50:45.321092    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:50:45.332832    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:45.332843    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:45.370946    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:50:45.370955    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:50:45.385498    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:50:45.385510    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:50:45.401365    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:50:45.401374    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:50:45.419004    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:50:45.419014    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:50:45.443288    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:50:45.443295    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:50:45.457288    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:45.457300    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:50:45.495375    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:50:45.495386    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:50:45.507604    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:45.507614    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:45.511772    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:50:45.511779    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:50:45.523209    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:50:45.523219    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:50:45.538078    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:50:45.538088    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:50:48.051347    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:50:53.053649    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:50:53.054073    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:50:53.094379    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:50:53.094520    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:50:53.116689    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:50:53.116783    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:50:53.132728    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:50:53.132809    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:50:53.145642    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:50:53.145714    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:50:53.156896    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:50:53.156971    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:50:53.171067    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:50:53.171144    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:50:53.182373    9750 logs.go:276] 0 containers: []
	W0318 13:50:53.182385    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:53.182446    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:50:53.193310    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:50:53.193329    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:50:53.193335    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:50:53.207781    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:50:53.207791    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:50:53.219653    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:50:53.219665    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:50:53.235917    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:50:53.235930    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:50:53.272948    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:50:53.272958    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:50:53.284030    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:50:53.284040    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:50:53.295548    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:50:53.295559    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:50:53.313575    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:50:53.313585    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:50:53.329474    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:50:53.329484    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:50:53.352712    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:53.352720    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:53.388668    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:50:53.388679    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:50:53.404591    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:53.404601    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:53.408704    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:50:53.408710    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:50:53.425484    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:50:53.425494    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:50:53.444655    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:50:53.444664    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:50:53.456847    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:53.456856    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:50:55.994559    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:00.996868    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:00.996995    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:01.010790    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:51:01.010862    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:01.024097    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:51:01.024177    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:01.040629    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:51:01.040692    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:01.052445    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:51:01.052506    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:01.062362    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:51:01.062429    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:01.077649    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:51:01.077724    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:01.090014    9750 logs.go:276] 0 containers: []
	W0318 13:51:01.090027    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:01.090079    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:01.100685    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:51:01.100701    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:51:01.100706    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:51:01.115034    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:51:01.115045    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:51:01.132728    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:51:01.132739    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:01.144597    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:01.144609    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:01.182876    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:01.182888    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:01.187409    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:51:01.187415    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:51:01.228077    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:51:01.228091    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:51:01.243118    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:01.243129    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:01.267245    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:51:01.267255    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:51:01.280996    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:51:01.281005    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:51:01.296364    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:51:01.296375    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:51:01.309838    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:01.309850    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:01.345316    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:51:01.345326    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:51:01.361706    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:51:01.361723    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:51:01.373849    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:51:01.373862    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:51:01.385274    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:51:01.385283    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:51:03.899622    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:08.900760    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:08.901015    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:08.926122    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:51:08.926243    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:08.947264    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:51:08.947341    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:08.959585    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:51:08.959657    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:08.970428    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:51:08.970502    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:08.980473    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:51:08.980536    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:08.990826    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:51:08.990889    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:09.001224    9750 logs.go:276] 0 containers: []
	W0318 13:51:09.001238    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:09.001291    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:09.011445    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:51:09.011465    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:51:09.011471    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:51:09.023313    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:51:09.023326    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:51:09.038812    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:51:09.038824    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:09.050709    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:09.050719    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:09.085428    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:51:09.085438    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:51:09.099538    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:51:09.099548    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:51:09.113847    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:51:09.113860    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:51:09.131111    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:09.131122    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:09.155289    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:09.155300    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:09.159690    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:51:09.159699    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:51:09.198993    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:51:09.199004    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:51:09.214543    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:09.214556    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:09.254107    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:51:09.254127    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:51:09.270407    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:51:09.270417    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:51:09.282602    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:51:09.282613    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:51:09.294201    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:51:09.294213    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:51:11.810955    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:16.813304    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:16.813548    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:16.833411    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:51:16.833494    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:16.847612    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:51:16.847691    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:16.859512    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:51:16.859581    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:16.870275    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:51:16.870342    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:16.880540    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:51:16.880607    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:16.890862    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:51:16.890926    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:16.900626    9750 logs.go:276] 0 containers: []
	W0318 13:51:16.900637    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:16.900690    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:16.910872    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:51:16.910888    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:51:16.910893    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:51:16.922697    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:16.922711    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:16.947029    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:16.947036    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:16.951239    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:51:16.951247    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:51:16.989108    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:51:16.989119    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:51:17.000520    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:51:17.000532    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:51:17.015097    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:51:17.015107    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:51:17.026783    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:17.026794    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:17.060796    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:51:17.060808    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:51:17.072016    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:51:17.072028    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:51:17.097026    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:51:17.097038    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:51:17.114468    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:17.114483    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:17.150946    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:51:17.150954    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:51:17.166296    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:51:17.166306    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:51:17.184469    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:51:17.184482    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:17.196324    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:51:17.196335    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:51:19.711720    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:24.714026    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:24.714253    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:24.730378    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:51:24.730470    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:24.743512    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:51:24.743586    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:24.754198    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:51:24.754270    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:24.765067    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:51:24.765125    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:24.775502    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:51:24.775559    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:24.786658    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:51:24.786724    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:24.796959    9750 logs.go:276] 0 containers: []
	W0318 13:51:24.796971    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:24.797026    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:24.807086    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:51:24.807110    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:24.807116    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:24.845411    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:51:24.845420    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:51:24.857570    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:51:24.857580    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:51:24.872438    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:51:24.872448    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:51:24.883696    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:51:24.883706    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:51:24.898834    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:24.898844    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:24.934754    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:51:24.934766    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:51:24.949144    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:51:24.949157    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:51:24.960425    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:51:24.960438    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:51:24.977334    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:24.977345    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:24.981913    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:51:24.981919    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:51:24.996101    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:51:24.996111    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:51:25.007343    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:25.007353    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:25.031070    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:51:25.031077    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:51:25.044912    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:51:25.044925    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:51:25.084411    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:51:25.084422    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:27.598117    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:32.600777    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:32.600891    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:32.616018    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:51:32.616090    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:32.626265    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:51:32.626337    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:32.636594    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:51:32.636663    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:32.647138    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:51:32.647207    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:32.660627    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:51:32.660698    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:32.670997    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:51:32.671068    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:32.681304    9750 logs.go:276] 0 containers: []
	W0318 13:51:32.681315    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:32.681371    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:32.697415    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:51:32.697435    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:51:32.697441    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:51:32.711333    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:51:32.711345    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:51:32.726792    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:51:32.726805    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:32.738794    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:32.738806    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:32.773219    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:51:32.773231    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:51:32.785452    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:51:32.785464    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:51:32.802997    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:51:32.803008    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:51:32.814526    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:51:32.814536    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:51:32.853289    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:51:32.853300    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:51:32.869168    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:32.869178    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:32.906987    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:32.906997    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:32.911164    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:51:32.911170    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:51:32.924844    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:51:32.924854    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:51:32.936772    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:51:32.936782    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:51:32.952300    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:51:32.952312    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:51:32.967088    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:32.967097    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:35.492626    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:40.494873    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:40.495010    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:40.506825    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:51:40.506902    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:40.518193    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:51:40.518260    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:40.529117    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:51:40.529189    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:40.539593    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:51:40.539667    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:40.550190    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:51:40.550260    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:40.560885    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:51:40.560957    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:40.572333    9750 logs.go:276] 0 containers: []
	W0318 13:51:40.572343    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:40.572402    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:40.582832    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:51:40.582852    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:40.582858    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:40.621432    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:51:40.621441    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:51:40.635934    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:51:40.635944    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:51:40.646737    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:51:40.646749    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:51:40.664985    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:40.664996    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:40.687261    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:51:40.687269    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:40.699533    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:51:40.699544    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:51:40.713837    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:51:40.713852    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:51:40.725668    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:51:40.725678    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:51:40.742938    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:51:40.742948    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:51:40.754866    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:51:40.754877    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:51:40.768869    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:51:40.768879    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:51:40.780285    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:40.780298    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:40.784279    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:40.784285    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:40.817319    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:51:40.817331    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:51:40.854749    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:51:40.854759    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:51:43.371837    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:48.374131    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:48.374268    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:48.387344    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:51:48.387412    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:48.401600    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:51:48.401673    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:48.412231    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:51:48.412298    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:48.422470    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:51:48.422536    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:48.436808    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:51:48.436866    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:48.447610    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:51:48.447688    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:48.459108    9750 logs.go:276] 0 containers: []
	W0318 13:51:48.459119    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:48.459179    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:48.485112    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:51:48.485133    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:48.485138    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:48.527278    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:48.527298    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:48.562762    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:51:48.562774    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:51:48.581394    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:51:48.581405    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:51:48.596124    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:51:48.596134    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:51:48.614665    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:51:48.614678    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:51:48.629110    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:51:48.629121    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:51:48.640001    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:51:48.640011    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:51:48.651424    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:48.651434    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:48.655484    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:51:48.655489    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:51:48.693169    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:51:48.693182    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:51:48.709012    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:51:48.709022    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:51:48.721690    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:51:48.721703    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:51:48.747606    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:51:48.747617    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:51:48.760297    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:48.760305    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:48.783818    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:51:48.783828    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:51.298396    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:51:56.300725    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:51:56.300901    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:51:56.317115    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:51:56.317197    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:51:56.332718    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:51:56.332787    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:51:56.342950    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:51:56.343021    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:51:56.353644    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:51:56.353714    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:51:56.364495    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:51:56.364580    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:51:56.375161    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:51:56.375226    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:51:56.385500    9750 logs.go:276] 0 containers: []
	W0318 13:51:56.385512    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:56.385587    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:51:56.396096    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:51:56.396118    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:51:56.396124    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:51:56.411541    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:51:56.411551    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:51:56.434568    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:51:56.434574    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:51:56.484966    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:51:56.484976    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:51:56.503305    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:51:56.503320    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:51:56.527968    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:56.527984    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:56.534835    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:51:56.534848    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:51:56.553724    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:51:56.553735    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:51:56.568849    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:51:56.568860    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:51:56.581724    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:51:56.581736    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:56.595106    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:56.595120    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:56.635352    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:56.635366    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:51:56.673257    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:51:56.673268    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:51:56.688638    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:51:56.688653    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:51:56.716240    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:51:56.716253    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:51:56.732618    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:51:56.732632    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:51:59.246868    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:04.248793    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:04.248943    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:04.260246    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:52:04.260316    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:04.271020    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:52:04.271094    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:04.284315    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:52:04.284383    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:04.294744    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:52:04.294811    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:04.305222    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:52:04.305291    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:04.316227    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:52:04.316292    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:04.326561    9750 logs.go:276] 0 containers: []
	W0318 13:52:04.326572    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:04.326624    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:04.338093    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:52:04.338113    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:52:04.338118    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:52:04.349150    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:04.349160    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:04.353689    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:04.353696    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:04.392504    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:52:04.392518    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:52:04.406775    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:52:04.406786    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:52:04.418962    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:52:04.418977    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:52:04.436848    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:52:04.436861    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:52:04.451654    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:04.451666    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:04.475285    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:52:04.475299    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:04.488253    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:52:04.488264    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:52:04.506702    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:52:04.506716    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:52:04.519165    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:04.519177    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:04.563811    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:52:04.563822    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:52:04.603974    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:52:04.603990    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:52:04.619995    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:52:04.620006    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:52:04.636919    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:52:04.636928    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:52:07.155581    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:12.158224    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:12.158453    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:12.177015    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:52:12.177102    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:12.192402    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:52:12.192473    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:12.203799    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:52:12.203862    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:12.214294    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:52:12.214362    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:12.224574    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:52:12.224641    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:12.235637    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:52:12.235701    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:12.247127    9750 logs.go:276] 0 containers: []
	W0318 13:52:12.247140    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:12.247195    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:12.257590    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:52:12.257609    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:12.257615    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:12.296676    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:52:12.296687    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:52:12.312219    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:52:12.312231    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:52:12.324543    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:52:12.324556    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:52:12.337323    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:52:12.337336    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:52:12.353397    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:52:12.353407    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:52:12.377953    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:12.377968    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:12.416007    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:52:12.416016    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:52:12.447284    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:52:12.447295    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:52:12.472361    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:12.472368    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:12.477095    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:52:12.477106    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:52:12.515766    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:12.515784    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:12.539632    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:52:12.539650    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:52:12.553976    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:52:12.553990    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:52:12.570256    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:52:12.570269    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:52:12.586427    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:52:12.586438    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:15.100996    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:20.103505    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:20.103728    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:20.120255    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:52:20.120344    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:20.132876    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:52:20.132946    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:20.144021    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:52:20.144093    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:20.155542    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:52:20.155616    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:20.167727    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:52:20.167800    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:20.184325    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:52:20.184362    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:20.196933    9750 logs.go:276] 0 containers: []
	W0318 13:52:20.196943    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:20.196990    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:20.215760    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:52:20.215778    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:52:20.215784    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:52:20.230399    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:52:20.230409    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:52:20.244240    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:52:20.244254    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:52:20.258660    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:52:20.258673    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:20.270755    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:20.270767    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:20.307618    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:52:20.307630    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:52:20.322644    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:52:20.322655    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:52:20.337766    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:52:20.337777    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:52:20.354282    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:20.354296    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:20.392780    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:20.392798    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:20.397828    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:52:20.397837    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:52:20.410131    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:52:20.410143    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:52:20.422394    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:52:20.422407    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:52:20.437758    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:20.437774    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:20.462171    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:52:20.462186    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:52:20.501808    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:52:20.501824    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:52:23.021568    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:28.022467    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:28.022779    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:52:28.046019    9750 logs.go:276] 2 containers: [5e53b215e10c 9e22a05ae9a3]
	I0318 13:52:28.046127    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:52:28.062137    9750 logs.go:276] 2 containers: [e1c3ff9be20d 9353fb6ad2b7]
	I0318 13:52:28.062217    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:52:28.075559    9750 logs.go:276] 1 containers: [f98a8f44d297]
	I0318 13:52:28.075632    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:52:28.087419    9750 logs.go:276] 2 containers: [bbed0dcf4649 d40fae90d1aa]
	I0318 13:52:28.087490    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:52:28.098842    9750 logs.go:276] 1 containers: [a60ef7691fe6]
	I0318 13:52:28.098913    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:52:28.110158    9750 logs.go:276] 2 containers: [c02c609fc8eb ba3504103d36]
	I0318 13:52:28.110225    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:52:28.121162    9750 logs.go:276] 0 containers: []
	W0318 13:52:28.121174    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:28.121235    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:52:28.133402    9750 logs.go:276] 1 containers: [657d2055fda5]
	I0318 13:52:28.133471    9750 logs.go:123] Gathering logs for kube-apiserver [9e22a05ae9a3] ...
	I0318 13:52:28.133481    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e22a05ae9a3"
	I0318 13:52:28.185428    9750 logs.go:123] Gathering logs for etcd [e1c3ff9be20d] ...
	I0318 13:52:28.185445    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c3ff9be20d"
	I0318 13:52:28.200489    9750 logs.go:123] Gathering logs for kube-scheduler [d40fae90d1aa] ...
	I0318 13:52:28.200505    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d40fae90d1aa"
	I0318 13:52:28.217235    9750 logs.go:123] Gathering logs for etcd [9353fb6ad2b7] ...
	I0318 13:52:28.217252    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9353fb6ad2b7"
	I0318 13:52:28.233097    9750 logs.go:123] Gathering logs for coredns [f98a8f44d297] ...
	I0318 13:52:28.233108    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f98a8f44d297"
	I0318 13:52:28.245263    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:28.245289    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:28.250278    9750 logs.go:123] Gathering logs for kube-apiserver [5e53b215e10c] ...
	I0318 13:52:28.250286    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e53b215e10c"
	I0318 13:52:28.272374    9750 logs.go:123] Gathering logs for kube-scheduler [bbed0dcf4649] ...
	I0318 13:52:28.272388    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbed0dcf4649"
	I0318 13:52:28.285387    9750 logs.go:123] Gathering logs for kube-proxy [a60ef7691fe6] ...
	I0318 13:52:28.285399    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a60ef7691fe6"
	I0318 13:52:28.298258    9750 logs.go:123] Gathering logs for storage-provisioner [657d2055fda5] ...
	I0318 13:52:28.298270    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657d2055fda5"
	I0318 13:52:28.312867    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:52:28.312887    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:28.326299    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:28.326312    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:28.364348    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:28.364358    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:52:28.401777    9750 logs.go:123] Gathering logs for kube-controller-manager [c02c609fc8eb] ...
	I0318 13:52:28.401788    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c02c609fc8eb"
	I0318 13:52:28.419085    9750 logs.go:123] Gathering logs for kube-controller-manager [ba3504103d36] ...
	I0318 13:52:28.419096    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba3504103d36"
	I0318 13:52:28.434166    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:52:28.434176    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:52:30.959622    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:35.960175    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:35.960203    9750 kubeadm.go:591] duration metric: took 4m3.736581041s to restartPrimaryControlPlane
	W0318 13:52:35.960230    9750 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:52:35.960244    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0318 13:52:37.016284    9750 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.056034416s)
	I0318 13:52:37.016365    9750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:52:37.021221    9750 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:52:37.023987    9750 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:52:37.026719    9750 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:52:37.026725    9750 kubeadm.go:156] found existing configuration files:
	
	I0318 13:52:37.026749    9750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/admin.conf
	I0318 13:52:37.029370    9750 kubeadm.go:162] "https://control-plane.minikube.internal:51361" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:52:37.029400    9750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:52:37.032250    9750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/kubelet.conf
	I0318 13:52:37.034736    9750 kubeadm.go:162] "https://control-plane.minikube.internal:51361" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:52:37.034758    9750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:52:37.037758    9750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/controller-manager.conf
	I0318 13:52:37.040888    9750 kubeadm.go:162] "https://control-plane.minikube.internal:51361" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:52:37.040911    9750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:52:37.043779    9750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/scheduler.conf
	I0318 13:52:37.046163    9750 kubeadm.go:162] "https://control-plane.minikube.internal:51361" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51361 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:52:37.046184    9750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
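	
	The grep/rm sequence above is minikube's stale-config cleanup: for each kubeconfig under /etc/kubernetes it checks whether the file already references the expected control-plane endpoint and removes it if not (here every file is simply absent, so each grep exits with status 2 and the rm is a no-op). A minimal bash sketch of the same pattern, using the endpoint and file list from this run — minikube actually issues these as separate ssh commands rather than one loop:
	
	    ENDPOINT="https://control-plane.minikube.internal:51361"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        # Keep the file only if it already points at the expected endpoint;
	        # otherwise remove it so the upcoming 'kubeadm init' regenerates it.
	        sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done
	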
	I0318 13:52:37.049365    9750 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:52:37.067978    9750 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0318 13:52:37.068012    9750 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:52:37.116096    9750 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:52:37.116152    9750 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:52:37.116199    9750 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:52:37.164389    9750 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:52:37.173599    9750 out.go:204]   - Generating certificates and keys ...
	I0318 13:52:37.173635    9750 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:52:37.173666    9750 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:52:37.173709    9750 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:52:37.173753    9750 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:52:37.173791    9750 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:52:37.173819    9750 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:52:37.173860    9750 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:52:37.173892    9750 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:52:37.173933    9750 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:52:37.173975    9750 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:52:37.174007    9750 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:52:37.174043    9750 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:52:37.207222    9750 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:52:37.311938    9750 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:52:37.448189    9750 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:52:37.502096    9750 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:52:37.532414    9750 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:52:37.532834    9750 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:52:37.532868    9750 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:52:37.625907    9750 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:52:37.635072    9750 out.go:204]   - Booting up control plane ...
	I0318 13:52:37.635129    9750 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:52:37.635173    9750 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:52:37.635208    9750 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:52:37.635277    9750 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:52:37.635361    9750 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:52:42.130769    9750 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501555 seconds
	I0318 13:52:42.130862    9750 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:52:42.136249    9750 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:52:42.657774    9750 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:52:42.658230    9750 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-813000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:52:43.162542    9750 kubeadm.go:309] [bootstrap-token] Using token: vvdmxl.j3rogto4uypt18n2
	I0318 13:52:43.169046    9750 out.go:204]   - Configuring RBAC rules ...
	I0318 13:52:43.169112    9750 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:52:43.169166    9750 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:52:43.177549    9750 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:52:43.178380    9750 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:52:43.179267    9750 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:52:43.180087    9750 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:52:43.183735    9750 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:52:43.380248    9750 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:52:43.566436    9750 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:52:43.566996    9750 kubeadm.go:309] 
	I0318 13:52:43.567071    9750 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:52:43.567082    9750 kubeadm.go:309] 
	I0318 13:52:43.567125    9750 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:52:43.567135    9750 kubeadm.go:309] 
	I0318 13:52:43.567150    9750 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:52:43.567193    9750 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:52:43.567220    9750 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:52:43.567224    9750 kubeadm.go:309] 
	I0318 13:52:43.567253    9750 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:52:43.567256    9750 kubeadm.go:309] 
	I0318 13:52:43.567286    9750 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:52:43.567290    9750 kubeadm.go:309] 
	I0318 13:52:43.567346    9750 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:52:43.567388    9750 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:52:43.567450    9750 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:52:43.567457    9750 kubeadm.go:309] 
	I0318 13:52:43.567498    9750 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:52:43.567540    9750 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:52:43.567543    9750 kubeadm.go:309] 
	I0318 13:52:43.567583    9750 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token vvdmxl.j3rogto4uypt18n2 \
	I0318 13:52:43.567643    9750 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f245f57130bb8b4395382cd74200f36af238eb522c12e31804ffbb421429194 \
	I0318 13:52:43.567653    9750 kubeadm.go:309] 	--control-plane 
	I0318 13:52:43.567657    9750 kubeadm.go:309] 
	I0318 13:52:43.567726    9750 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:52:43.567732    9750 kubeadm.go:309] 
	I0318 13:52:43.567776    9750 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token vvdmxl.j3rogto4uypt18n2 \
	I0318 13:52:43.567834    9750 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6f245f57130bb8b4395382cd74200f36af238eb522c12e31804ffbb421429194 
	I0318 13:52:43.567893    9750 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:52:43.567903    9750 cni.go:84] Creating CNI manager for ""
	I0318 13:52:43.567911    9750 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:52:43.572394    9750 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:52:43.580362    9750 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:52:43.583248    9750 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:52:43.587924    9750 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:52:43.587967    9750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-813000 minikube.k8s.io/updated_at=2024_03_18T13_52_43_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=stopped-upgrade-813000 minikube.k8s.io/primary=true
	I0318 13:52:43.587968    9750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:52:43.625531    9750 kubeadm.go:1107] duration metric: took 37.598792ms to wait for elevateKubeSystemPrivileges
	I0318 13:52:43.629441    9750 ops.go:34] apiserver oom_adj: -16
	W0318 13:52:43.629464    9750 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:52:43.629469    9750 kubeadm.go:393] duration metric: took 4m11.419065459s to StartCluster
	I0318 13:52:43.629479    9750 settings.go:142] acquiring lock: {Name:mkb16a292265123b9734bd031ef06799b38c3f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:52:43.629561    9750 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:52:43.629966    9750 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/kubeconfig: {Name:mk6a62990bf9328d54440f15380010f8199a9228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:52:43.630166    9750 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:52:43.634412    9750 out.go:177] * Verifying Kubernetes components...
	I0318 13:52:43.630226    9750 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:52:43.630260    9750 config.go:182] Loaded profile config "stopped-upgrade-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 13:52:43.642228    9750 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-813000"
	I0318 13:52:43.642234    9750 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-813000"
	I0318 13:52:43.642231    9750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:52:43.642245    9750 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-813000"
	W0318 13:52:43.642265    9750 addons.go:243] addon storage-provisioner should already be in state true
	I0318 13:52:43.642247    9750 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-813000"
	I0318 13:52:43.642281    9750 host.go:66] Checking if "stopped-upgrade-813000" exists ...
	I0318 13:52:43.643523    9750 kapi.go:59] client config for stopped-upgrade-813000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/stopped-upgrade-813000/client.key", CAFile:"/Users/jenkins/minikube-integration/18421-6777/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105e86a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 13:52:43.643662    9750 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-813000"
	W0318 13:52:43.643667    9750 addons.go:243] addon default-storageclass should already be in state true
	I0318 13:52:43.643673    9750 host.go:66] Checking if "stopped-upgrade-813000" exists ...
	I0318 13:52:43.648336    9750 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:52:43.652401    9750 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:52:43.652409    9750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:52:43.652416    9750 sshutil.go:53] new ssh client: &{IP:localhost Port:51326 SSHKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/stopped-upgrade-813000/id_rsa Username:docker}
	I0318 13:52:43.653063    9750 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:52:43.653069    9750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:52:43.653073    9750 sshutil.go:53] new ssh client: &{IP:localhost Port:51326 SSHKeyPath:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/stopped-upgrade-813000/id_rsa Username:docker}
	I0318 13:52:43.741828    9750 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:52:43.748720    9750 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:52:43.748771    9750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:43.752560    9750 api_server.go:72] duration metric: took 122.382875ms to wait for apiserver process to appear ...
	I0318 13:52:43.752567    9750 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:52:43.752573    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:43.784544    9750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:52:43.784544    9750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:52:48.754690    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:48.754727    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:53.755000    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:53.755022    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:52:58.755343    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:52:58.755416    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:03.755939    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:03.756002    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:08.756690    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:08.756716    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:13.757495    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:13.757550    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0318 13:53:14.177902    9750 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0318 13:53:14.183050    9750 out.go:177] * Enabled addons: storage-provisioner
	I0318 13:53:14.187689    9750 addons.go:505] duration metric: took 30.557646333s for enable addons: enabled=[storage-provisioner]
	I0318 13:53:18.758631    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:18.758677    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:23.760501    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:23.760588    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:28.762384    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:28.762429    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:33.764671    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:33.764706    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:38.766945    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:38.767005    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:43.769231    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
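	
	Each "Checking apiserver healthz" / "stopped:" pair above is a single probe of https://10.0.2.15:8443/healthz with a 5-second client timeout, repeated until minikube's 6m0s node-wait budget is exhausted. An equivalent probe loop in bash — curl here stands in for minikube's internal Go client, and the -k flag is an assumption since the cluster CA is not loaded in this sketch:
	
	    # Probe the apiserver health endpoint the way the log does:
	    # one attempt roughly every 5s, each with a 5s timeout.
	    for i in $(seq 1 72); do   # ~6 minutes total, matching the wait budget
	        if curl -sk --max-time 5 https://10.0.2.15:8443/healthz; then
	            echo " apiserver healthy"; break
	        fi
	        echo "healthz probe $i failed; retrying"
	    done
	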
	I0318 13:53:43.769369    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:53:43.793835    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:53:43.793917    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:53:43.808005    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:53:43.808076    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:53:43.818817    9750 logs.go:276] 2 containers: [5ef0c31bcb0a 27e00e6f1725]
	I0318 13:53:43.818891    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:53:43.829122    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:53:43.829192    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:53:43.839693    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:53:43.839766    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:53:43.850081    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:53:43.850139    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:53:43.860604    9750 logs.go:276] 0 containers: []
	W0318 13:53:43.860616    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:43.860671    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:53:43.870706    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:53:43.870724    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:43.870729    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:43.907533    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:43.907544    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:43.911408    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:43.911416    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:53:43.947721    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:53:43.947734    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:53:43.961792    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:53:43.961801    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:53:43.975516    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:53:43.975526    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:53:43.986779    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:53:43.986789    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:43.998067    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:53:43.998079    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:53:44.014827    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:53:44.014837    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:53:44.026347    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:53:44.026357    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:53:44.040291    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:53:44.040302    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:53:44.051826    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:53:44.051836    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:53:44.070842    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:53:44.070852    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
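	
	After each failed probe, minikube gathers the same diagnostics bundle: the kubelet and Docker journals, dmesg, "kubectl describe nodes", container status, and the last 400 lines of every control-plane container. The individual commands are all visible verbatim in the Run: lines above; collecting them into one bash snippet (the aggregation is ours, and the container IDs are specific to this run) gives a reproducible version of the cycle:
	
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u docker -u cri-docker -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    # apiserver, etcd, coredns (x2), scheduler, kube-proxy,
	    # controller-manager, storage-provisioner — IDs from this run:
	    for c in f4d422781b66 05269ef81ef0 5ef0c31bcb0a 27e00e6f1725 \
	             0002ddb3bb0b 7f93d9e1ed7a f535ec7768a5 2cf8842023ea; do
	        docker logs --tail 400 "$c"
	    done
	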
	I0318 13:53:46.596118    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:51.598331    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:51.598583    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:53:51.623083    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:53:51.623173    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:53:51.636197    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:53:51.636267    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:53:51.647496    9750 logs.go:276] 2 containers: [5ef0c31bcb0a 27e00e6f1725]
	I0318 13:53:51.647554    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:53:51.658081    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:53:51.658153    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:53:51.668884    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:53:51.668949    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:53:51.678996    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:53:51.679052    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:53:51.688718    9750 logs.go:276] 0 containers: []
	W0318 13:53:51.688734    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:51.688796    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:53:51.699143    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:53:51.699160    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:53:51.699166    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:53:51.723644    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:53:51.723650    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:53:51.734926    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:53:51.734937    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:53:51.746243    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:51.746253    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:53:51.782733    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:53:51.782748    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:53:51.797232    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:53:51.797242    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:53:51.811621    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:53:51.811631    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:53:51.826160    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:53:51.826175    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:53:51.839848    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:53:51.839858    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:53:51.857832    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:51.857842    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:51.895791    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:51.895800    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:51.899990    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:53:51.899999    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:53:51.912128    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:53:51.912138    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:54.425683    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:53:59.427955    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:53:59.428105    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:53:59.439867    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:53:59.439932    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:53:59.450703    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:53:59.450766    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:53:59.462702    9750 logs.go:276] 2 containers: [5ef0c31bcb0a 27e00e6f1725]
	I0318 13:53:59.462765    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:53:59.473267    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:53:59.473335    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:53:59.483317    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:53:59.483384    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:53:59.493539    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:53:59.493600    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:53:59.503627    9750 logs.go:276] 0 containers: []
	W0318 13:53:59.503637    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:59.503691    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:53:59.517774    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:53:59.517789    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:53:59.517793    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:59.529217    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:59.529227    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:53:59.564520    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:53:59.564533    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:53:59.578643    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:53:59.578655    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:53:59.590824    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:53:59.590834    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:53:59.602336    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:53:59.602346    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:53:59.613646    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:53:59.613657    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:53:59.631012    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:53:59.631025    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:53:59.654623    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:59.654629    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:59.690528    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:59.690540    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:59.695053    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:53:59.695058    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:53:59.708815    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:53:59.708824    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:53:59.720363    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:53:59.720372    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:54:02.239535    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:54:07.241892    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:54:07.242190    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:54:07.282579    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:54:07.282696    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:54:07.301273    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:54:07.301385    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:54:07.315221    9750 logs.go:276] 2 containers: [5ef0c31bcb0a 27e00e6f1725]
	I0318 13:54:07.315302    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:54:07.326786    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:54:07.326852    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:54:07.338462    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:54:07.338525    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:54:07.349041    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:54:07.349102    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:54:07.359291    9750 logs.go:276] 0 containers: []
	W0318 13:54:07.359302    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:54:07.359354    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:54:07.369274    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:54:07.369289    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:54:07.369293    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:54:07.383597    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:54:07.383608    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:54:07.397490    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:54:07.397502    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:54:07.411724    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:54:07.411735    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:54:07.427822    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:54:07.427832    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:54:07.440564    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:54:07.440575    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:54:07.461280    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:54:07.461291    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:54:07.498779    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:54:07.498785    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:54:07.534694    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:54:07.534705    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:54:07.545900    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:54:07.545913    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:54:07.557457    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:54:07.557467    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:54:07.581097    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:54:07.581104    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:54:07.585438    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:54:07.585447    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:54:10.102750    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:54:15.105163    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:54:15.105513    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:54:15.141446    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:54:15.141586    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:54:15.163156    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:54:15.163240    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:54:15.178469    9750 logs.go:276] 2 containers: [5ef0c31bcb0a 27e00e6f1725]
	I0318 13:54:15.178545    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:54:15.190544    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:54:15.190616    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:54:15.201134    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:54:15.201196    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:54:15.212514    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:54:15.212586    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:54:15.222705    9750 logs.go:276] 0 containers: []
	W0318 13:54:15.222716    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:54:15.222769    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:54:15.232872    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:54:15.232890    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:54:15.232896    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:54:15.246076    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:54:15.246090    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:54:15.283080    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:54:15.283089    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:54:15.321881    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:54:15.321894    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:54:15.342796    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:54:15.342809    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:54:15.357534    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:54:15.357545    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:54:15.369238    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:54:15.369249    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:54:15.380748    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:54:15.380757    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:54:15.397881    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:54:15.397892    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:54:15.409867    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:54:15.409881    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:54:15.413913    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:54:15.413921    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:54:15.425281    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:54:15.425294    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:54:15.440344    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:54:15.440355    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:54:17.966956    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:54:22.968637    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:54:22.969087    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:54:23.009180    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:54:23.009327    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:54:23.031578    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:54:23.031688    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:54:23.049624    9750 logs.go:276] 2 containers: [5ef0c31bcb0a 27e00e6f1725]
	I0318 13:54:23.049698    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:54:23.062205    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:54:23.062272    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:54:23.072666    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:54:23.072735    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:54:23.083359    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:54:23.083424    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:54:23.093715    9750 logs.go:276] 0 containers: []
	W0318 13:54:23.093730    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:54:23.093781    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:54:23.104055    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:54:23.104071    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:54:23.104075    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:54:23.115890    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:54:23.115900    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:54:23.127260    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:54:23.127271    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:54:23.143990    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:54:23.144002    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:54:23.168712    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:54:23.168720    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:54:23.180635    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:54:23.180645    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:54:23.218864    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:54:23.218872    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:54:23.232693    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:54:23.232704    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:54:23.244261    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:54:23.244274    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:54:23.258823    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:54:23.258834    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:54:23.270333    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:54:23.270345    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:54:23.274546    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:54:23.274553    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:54:23.308095    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:54:23.308107    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
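
The gathering pass itself is a fixed set of shell commands, visible above: docker logs --tail 400 for each container, journalctl for the docker/cri-docker and kubelet units, dmesg for kernel messages, kubectl describe nodes, and a crictl listing that falls back to docker ps when crictl is absent. A compact sketch of the two main command shapes (helper names are assumed; minikube wraps each command in ssh_runner.Run):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherContainerLogs caps output at 400 lines, matching the
    // "docker logs --tail 400 <id>" commands in the log. Sketch only.
    func gatherContainerLogs(containerID string) (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            fmt.Sprintf("docker logs --tail 400 %s", containerID)).CombinedOutput()
        return string(out), err
    }

    // gatherUnitLogs pulls the last 400 journal lines for the named systemd
    // units, matching the "sudo journalctl -u ... -n 400" commands above.
    func gatherUnitLogs(units ...string) (string, error) {
        cmd := "sudo journalctl"
        for _, u := range units {
            cmd += " -u " + u
        }
        cmd += " -n 400"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        if logs, err := gatherContainerLogs("f4d422781b66"); err == nil {
            fmt.Println(logs)
        }
        if logs, err := gatherUnitLogs("docker", "cri-docker"); err == nil {
            fmt.Println(logs)
        }
    }
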
	I0318 13:54:25.823333    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:54:30.825786    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:54:30.826347    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:54:30.862960    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:54:30.863056    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:54:30.883613    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:54:30.883686    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:54:30.899966    9750 logs.go:276] 2 containers: [5ef0c31bcb0a 27e00e6f1725]
	I0318 13:54:30.900020    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:54:30.912656    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:54:30.912714    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:54:30.924826    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:54:30.924904    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:54:30.936731    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:54:30.936766    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:54:30.949210    9750 logs.go:276] 0 containers: []
	W0318 13:54:30.949225    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:54:30.949283    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:54:30.961093    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:54:30.961111    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:54:30.961116    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:54:30.999985    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:54:30.999995    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:54:31.039646    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:54:31.039658    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:54:31.052902    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:54:31.052913    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:54:31.065740    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:54:31.065752    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:54:31.087295    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:54:31.087304    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:54:31.098806    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:54:31.098817    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:54:31.116250    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:54:31.116261    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:54:31.129712    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:54:31.129726    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:54:31.155747    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:54:31.155760    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:54:31.160299    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:54:31.160307    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:54:31.177686    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:54:31.177695    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:54:31.191538    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:54:31.191549    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:54:33.705136    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:54:38.706055    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:54:38.706173    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:54:38.718455    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:54:38.718530    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:54:38.742323    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:54:38.742401    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:54:38.754887    9750 logs.go:276] 2 containers: [5ef0c31bcb0a 27e00e6f1725]
	I0318 13:54:38.754958    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:54:38.767383    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:54:38.767449    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:54:38.778106    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:54:38.778165    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:54:38.789765    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:54:38.789830    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:54:38.799722    9750 logs.go:276] 0 containers: []
	W0318 13:54:38.799734    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:54:38.799785    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:54:38.810570    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:54:38.810590    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:54:38.810595    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:54:38.814954    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:54:38.814963    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:54:38.829380    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:54:38.829392    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:54:38.843927    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:54:38.843938    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:54:38.859321    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:54:38.859335    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:54:38.874354    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:54:38.874366    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:54:38.892660    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:54:38.892670    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:54:38.918195    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:54:38.918204    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:54:38.956242    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:54:38.956251    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:54:38.994192    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:54:38.994201    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:54:39.008933    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:54:39.008942    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:54:39.020092    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:54:39.020102    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:54:39.041684    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:54:39.041694    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:54:41.555858    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:54:46.558153    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:54:46.558506    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:54:46.593016    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:54:46.593139    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:54:46.612765    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:54:46.612858    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:54:46.626892    9750 logs.go:276] 2 containers: [5ef0c31bcb0a 27e00e6f1725]
	I0318 13:54:46.626951    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:54:46.645395    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:54:46.645446    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:54:46.656031    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:54:46.656093    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:54:46.666241    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:54:46.666301    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:54:46.676261    9750 logs.go:276] 0 containers: []
	W0318 13:54:46.676274    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:54:46.676319    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:54:46.686161    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:54:46.686176    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:54:46.686181    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:54:46.709248    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:54:46.709262    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:54:46.720370    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:54:46.720379    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:54:46.762693    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:54:46.762703    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:54:46.777311    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:54:46.777321    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:54:46.791297    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:54:46.791308    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:54:46.802958    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:54:46.802972    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:54:46.814312    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:54:46.814325    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:54:46.831579    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:54:46.831589    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:54:46.868023    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:54:46.868033    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:54:46.872230    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:54:46.872236    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:54:46.887036    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:54:46.887046    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:54:46.904205    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:54:46.904214    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:54:49.420487    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:54:54.421400    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:54:54.421643    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:54:54.449395    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:54:54.449507    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:54:54.467424    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:54:54.467499    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:54:54.480241    9750 logs.go:276] 2 containers: [5ef0c31bcb0a 27e00e6f1725]
	I0318 13:54:54.480315    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:54:54.491750    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:54:54.491812    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:54:54.501927    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:54:54.501988    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:54:54.512021    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:54:54.512074    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:54:54.522389    9750 logs.go:276] 0 containers: []
	W0318 13:54:54.522399    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:54:54.522458    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:54:54.532848    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:54:54.532862    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:54:54.532867    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:54:54.546923    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:54:54.546933    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:54:54.558485    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:54:54.558495    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:54:54.575162    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:54:54.575173    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:54:54.586205    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:54:54.586218    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:54:54.598379    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:54:54.598391    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:54:54.638003    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:54:54.638016    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:54:54.649754    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:54:54.649765    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:54:54.663993    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:54:54.664006    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:54:54.678018    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:54:54.678029    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:54:54.689625    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:54:54.689636    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:54:54.712619    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:54:54.712626    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:54:54.748508    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:54:54.748516    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
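
Stepping back, the whole section is one retry loop: probe /healthz, time out after five seconds, gather diagnostics, and probe again, at a cadence of roughly eight seconds per cycle in the timestamps above. A sketch of that outer loop (the interval and overall deadline are assumptions for illustration, not minikube's actual constants; checkHealthz stands for the probe sketched earlier):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForHealthy retries a health probe until the overall deadline passes,
    // matching the repeated "Checking apiserver healthz" / "stopped" pairs in
    // this log. Interval and deadline are illustrative assumptions.
    func waitForHealthy(check func() error, interval, deadline time.Duration) error {
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            if err := check(); err == nil {
                return nil
            }
            time.Sleep(interval)
        }
        return errors.New("apiserver never became healthy")
    }

    func main() {
        err := waitForHealthy(func() error {
            return errors.New("context deadline exceeded") // stand-in for the real probe
        }, 3*time.Second, 30*time.Second)
        fmt.Println(err)
    }
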
	I0318 13:54:57.254017    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:55:02.254856    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:55:02.255309    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:55:02.295664    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:55:02.295797    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:55:02.316392    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:55:02.316487    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:55:02.333047    9750 logs.go:276] 4 containers: [332d222c0bbb 1b02f9dbe0ea 5ef0c31bcb0a 27e00e6f1725]
	I0318 13:55:02.333128    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:55:02.345489    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:55:02.345557    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:55:02.355673    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:55:02.355731    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:55:02.365947    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:55:02.366007    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:55:02.376144    9750 logs.go:276] 0 containers: []
	W0318 13:55:02.376155    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:55:02.376211    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:55:02.386586    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:55:02.386601    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:55:02.386610    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:55:02.400331    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:55:02.400342    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:55:02.413802    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:55:02.413813    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:55:02.431330    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:55:02.431340    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:55:02.436132    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:55:02.436140    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:55:02.450753    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:55:02.450764    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:55:02.464767    9750 logs.go:123] Gathering logs for coredns [332d222c0bbb] ...
	I0318 13:55:02.464780    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332d222c0bbb"
	I0318 13:55:02.476052    9750 logs.go:123] Gathering logs for coredns [1b02f9dbe0ea] ...
	I0318 13:55:02.476062    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b02f9dbe0ea"
	I0318 13:55:02.487191    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:55:02.487205    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:55:02.499747    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:55:02.499761    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:55:02.511641    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:55:02.511656    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:55:02.547328    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:55:02.547337    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:55:02.584921    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:55:02.584935    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:55:02.596998    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:55:02.597011    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:55:02.611575    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:55:02.611587    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:55:05.137527    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:55:10.140242    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:55:10.140582    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:55:10.180295    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:55:10.180423    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:55:10.201589    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:55:10.201689    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:55:10.217506    9750 logs.go:276] 4 containers: [332d222c0bbb 1b02f9dbe0ea 5ef0c31bcb0a 27e00e6f1725]
	I0318 13:55:10.217578    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:55:10.230102    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:55:10.230165    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:55:10.241129    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:55:10.241197    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:55:10.251247    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:55:10.251310    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:55:10.261569    9750 logs.go:276] 0 containers: []
	W0318 13:55:10.261581    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:55:10.261636    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:55:10.276516    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:55:10.276530    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:55:10.276536    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:55:10.288979    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:55:10.288993    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:55:10.324508    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:55:10.324520    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:55:10.340421    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:55:10.340432    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:55:10.355275    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:55:10.355285    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:55:10.367067    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:55:10.367080    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:55:10.371762    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:55:10.371771    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:55:10.386495    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:55:10.386506    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:55:10.403804    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:55:10.403814    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:55:10.415490    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:55:10.415501    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:55:10.453095    9750 logs.go:123] Gathering logs for coredns [332d222c0bbb] ...
	I0318 13:55:10.453104    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332d222c0bbb"
	I0318 13:55:10.465536    9750 logs.go:123] Gathering logs for coredns [1b02f9dbe0ea] ...
	I0318 13:55:10.465551    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b02f9dbe0ea"
	I0318 13:55:10.477682    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:55:10.477693    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:55:10.494970    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:55:10.494980    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:55:10.506676    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:55:10.506687    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:55:13.033571    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:55:18.036240    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:55:18.036619    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:55:18.067269    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:55:18.067389    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:55:18.087166    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:55:18.087266    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:55:18.101399    9750 logs.go:276] 4 containers: [332d222c0bbb 1b02f9dbe0ea 5ef0c31bcb0a 27e00e6f1725]
	I0318 13:55:18.101465    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:55:18.112624    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:55:18.112688    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:55:18.123504    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:55:18.123560    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:55:18.134109    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:55:18.134173    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:55:18.144073    9750 logs.go:276] 0 containers: []
	W0318 13:55:18.144085    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:55:18.144132    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:55:18.154473    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:55:18.154490    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:55:18.154495    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:55:18.158989    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:55:18.158995    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:55:18.193209    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:55:18.193219    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:55:18.207128    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:55:18.207138    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:55:18.218698    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:55:18.218711    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:55:18.229821    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:55:18.229837    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:55:18.243594    9750 logs.go:123] Gathering logs for coredns [332d222c0bbb] ...
	I0318 13:55:18.243606    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332d222c0bbb"
	I0318 13:55:18.255625    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:55:18.255638    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:55:18.267638    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:55:18.267651    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:55:18.281984    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:55:18.281995    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:55:18.298831    9750 logs.go:123] Gathering logs for coredns [1b02f9dbe0ea] ...
	I0318 13:55:18.298840    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b02f9dbe0ea"
	I0318 13:55:18.310149    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:55:18.310161    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:55:18.347832    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:55:18.347841    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:55:18.359977    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:55:18.359986    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:55:18.383911    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:55:18.383918    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:55:20.897799    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:55:25.900257    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:55:25.900635    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:55:25.938821    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:55:25.938936    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:55:25.958899    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:55:25.958980    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:55:25.977198    9750 logs.go:276] 4 containers: [332d222c0bbb 1b02f9dbe0ea 5ef0c31bcb0a 27e00e6f1725]
	I0318 13:55:25.977269    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:55:25.991483    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:55:25.991536    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:55:26.002309    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:55:26.002377    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:55:26.012791    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:55:26.012854    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:55:26.028013    9750 logs.go:276] 0 containers: []
	W0318 13:55:26.028023    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:55:26.028069    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:55:26.038856    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:55:26.038871    9750 logs.go:123] Gathering logs for coredns [1b02f9dbe0ea] ...
	I0318 13:55:26.038876    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b02f9dbe0ea"
	I0318 13:55:26.050875    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:55:26.050885    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:55:26.062781    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:55:26.062792    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:55:26.101453    9750 logs.go:123] Gathering logs for coredns [332d222c0bbb] ...
	I0318 13:55:26.101462    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332d222c0bbb"
	I0318 13:55:26.113579    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:55:26.113593    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:55:26.130678    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:55:26.130692    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:55:26.142564    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:55:26.142573    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:55:26.154054    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:55:26.154064    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:55:26.165204    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:55:26.165215    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:55:26.177188    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:55:26.177201    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:55:26.194378    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:55:26.194390    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:55:26.217915    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:55:26.217922    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:55:26.221818    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:55:26.221826    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:55:26.261200    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:55:26.261214    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:55:26.275544    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:55:26.275557    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:55:28.791873    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:55:33.794153    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:55:33.794233    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:55:33.805392    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:55:33.805458    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:55:33.820837    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:55:33.820902    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:55:33.831431    9750 logs.go:276] 4 containers: [332d222c0bbb 1b02f9dbe0ea 5ef0c31bcb0a 27e00e6f1725]
	I0318 13:55:33.831498    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:55:33.845620    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:55:33.845671    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:55:33.856735    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:55:33.856813    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:55:33.867872    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:55:33.867942    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:55:33.877772    9750 logs.go:276] 0 containers: []
	W0318 13:55:33.877782    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:55:33.877837    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:55:33.888152    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:55:33.888169    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:55:33.888175    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:55:33.902039    9750 logs.go:123] Gathering logs for coredns [332d222c0bbb] ...
	I0318 13:55:33.902050    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332d222c0bbb"
	I0318 13:55:33.913831    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:55:33.913839    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:55:33.931024    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:55:33.931037    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:55:33.955762    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:55:33.955771    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:55:33.990601    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:55:33.990614    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:55:33.995058    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:55:33.995068    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:55:34.011932    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:55:34.011944    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:55:34.023732    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:55:34.023744    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:55:34.059962    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:55:34.059970    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:55:34.074617    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:55:34.074627    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:55:34.086059    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:55:34.086070    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:55:34.098074    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:55:34.098085    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:55:34.113984    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:55:34.113995    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:55:34.128063    9750 logs.go:123] Gathering logs for coredns [1b02f9dbe0ea] ...
	I0318 13:55:34.128076    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b02f9dbe0ea"
	I0318 13:55:36.646091    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:55:41.648934    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:55:41.649355    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:55:41.688925    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:55:41.689049    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:55:41.711570    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:55:41.711665    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:55:41.727144    9750 logs.go:276] 4 containers: [332d222c0bbb 1b02f9dbe0ea 5ef0c31bcb0a 27e00e6f1725]
	I0318 13:55:41.727213    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:55:41.742301    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:55:41.742362    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:55:41.757422    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:55:41.757488    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:55:41.767942    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:55:41.768003    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:55:41.778085    9750 logs.go:276] 0 containers: []
	W0318 13:55:41.778094    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:55:41.778139    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:55:41.788204    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:55:41.788221    9750 logs.go:123] Gathering logs for coredns [332d222c0bbb] ...
	I0318 13:55:41.788226    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332d222c0bbb"
	I0318 13:55:41.800340    9750 logs.go:123] Gathering logs for coredns [1b02f9dbe0ea] ...
	I0318 13:55:41.800351    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b02f9dbe0ea"
	I0318 13:55:41.814315    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:55:41.814326    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:55:41.851257    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:55:41.851266    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:55:41.855554    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:55:41.855562    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:55:41.867907    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:55:41.867919    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:55:41.886832    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:55:41.886843    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:55:41.911071    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:55:41.911077    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:55:41.922491    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:55:41.922504    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:55:41.934595    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:55:41.934607    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:55:41.946289    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:55:41.946300    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:55:41.966534    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:55:41.966545    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:55:41.978235    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:55:41.978244    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:55:42.020556    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:55:42.020573    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:55:42.034438    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:55:42.034451    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:55:44.549934    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:55:49.552228    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:55:49.552688    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:55:49.591417    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:55:49.591552    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:55:49.612296    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:55:49.612399    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:55:49.628137    9750 logs.go:276] 4 containers: [332d222c0bbb 1b02f9dbe0ea 5ef0c31bcb0a 27e00e6f1725]
	I0318 13:55:49.628213    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:55:49.640753    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:55:49.640822    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:55:49.651856    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:55:49.651923    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:55:49.663003    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:55:49.663069    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:55:49.673597    9750 logs.go:276] 0 containers: []
	W0318 13:55:49.673609    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:55:49.673665    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:55:49.684058    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:55:49.684075    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:55:49.684081    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:55:49.703288    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:55:49.703299    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:55:49.727082    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:55:49.727090    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:55:49.742475    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:55:49.742484    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:55:49.760174    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:55:49.760185    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:55:49.771858    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:55:49.771870    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:55:49.789237    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:55:49.789248    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:55:49.793324    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:55:49.793332    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:55:49.805451    9750 logs.go:123] Gathering logs for coredns [1b02f9dbe0ea] ...
	I0318 13:55:49.805464    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b02f9dbe0ea"
	I0318 13:55:49.817909    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:55:49.817922    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:55:49.829540    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:55:49.829551    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:55:49.866481    9750 logs.go:123] Gathering logs for coredns [332d222c0bbb] ...
	I0318 13:55:49.866490    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332d222c0bbb"
	I0318 13:55:49.878258    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:55:49.878267    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:55:49.890102    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:55:49.890114    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:55:49.925936    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:55:49.925949    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:55:52.442848    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:55:57.445594    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:55:57.445688    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:55:57.460952    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:55:57.460999    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:55:57.472399    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:55:57.472455    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:55:57.484927    9750 logs.go:276] 4 containers: [332d222c0bbb 1b02f9dbe0ea 5ef0c31bcb0a 27e00e6f1725]
	I0318 13:55:57.484997    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:55:57.499678    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:55:57.499730    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:55:57.511518    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:55:57.511575    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:55:57.523246    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:55:57.523310    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:55:57.535402    9750 logs.go:276] 0 containers: []
	W0318 13:55:57.535412    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:55:57.535461    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:55:57.550198    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:55:57.550211    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:55:57.550216    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:55:57.565896    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:55:57.565907    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:55:57.581714    9750 logs.go:123] Gathering logs for coredns [332d222c0bbb] ...
	I0318 13:55:57.581727    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332d222c0bbb"
	I0318 13:55:57.596847    9750 logs.go:123] Gathering logs for coredns [1b02f9dbe0ea] ...
	I0318 13:55:57.596857    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b02f9dbe0ea"
	I0318 13:55:57.614326    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:55:57.614335    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:55:57.631686    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:55:57.631699    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:55:57.651430    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:55:57.651442    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:55:57.665171    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:55:57.665182    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:55:57.690201    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:55:57.690212    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:55:57.705657    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:55:57.705667    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:55:57.744685    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:55:57.744698    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:55:57.750021    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:55:57.750033    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:55:57.767230    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:55:57.767238    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:55:57.807294    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:55:57.807303    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:55:57.820650    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:55:57.820660    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:56:00.338507    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:56:05.341038    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:56:05.341156    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:56:05.354707    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:56:05.354781    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:56:05.371896    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:56:05.371960    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:56:05.385327    9750 logs.go:276] 4 containers: [332d222c0bbb 1b02f9dbe0ea 5ef0c31bcb0a 27e00e6f1725]
	I0318 13:56:05.385394    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:56:05.398070    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:56:05.398148    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:56:05.412997    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:56:05.413075    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:56:05.426151    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:56:05.426219    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:56:05.437894    9750 logs.go:276] 0 containers: []
	W0318 13:56:05.437906    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:56:05.437960    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:56:05.451189    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:56:05.451207    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:56:05.451213    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:56:05.490264    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:56:05.490274    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:56:05.505743    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:56:05.505753    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:56:05.519535    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:56:05.519546    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:56:05.543573    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:56:05.543584    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:56:05.580981    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:56:05.580993    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:56:05.604087    9750 logs.go:123] Gathering logs for coredns [332d222c0bbb] ...
	I0318 13:56:05.604097    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332d222c0bbb"
	I0318 13:56:05.616217    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:56:05.616229    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:56:05.633828    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:56:05.633837    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:56:05.645991    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:56:05.646001    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:56:05.657929    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:56:05.657942    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:56:05.662643    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:56:05.662650    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:56:05.677252    9750 logs.go:123] Gathering logs for coredns [1b02f9dbe0ea] ...
	I0318 13:56:05.677262    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b02f9dbe0ea"
	I0318 13:56:05.689622    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:56:05.689632    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:56:05.701929    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:56:05.701940    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:56:08.214532    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:56:13.216818    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:56:13.217213    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:56:13.259257    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:56:13.259377    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:56:13.281748    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:56:13.282781    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:56:13.301569    9750 logs.go:276] 4 containers: [332d222c0bbb 1b02f9dbe0ea 5ef0c31bcb0a 27e00e6f1725]
	I0318 13:56:13.301636    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:56:13.313469    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:56:13.313520    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:56:13.324556    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:56:13.324619    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:56:13.337687    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:56:13.337757    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:56:13.353141    9750 logs.go:276] 0 containers: []
	W0318 13:56:13.353151    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:56:13.353205    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:56:13.363858    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:56:13.363875    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:56:13.363880    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:56:13.386728    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:56:13.386736    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:56:13.423610    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:56:13.423617    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:56:13.427834    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:56:13.427843    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:56:13.442365    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:56:13.442374    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:56:13.462502    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:56:13.462511    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:56:13.474865    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:56:13.474878    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:56:13.489527    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:56:13.489537    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:56:13.508273    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:56:13.508283    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:56:13.522913    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:56:13.522921    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:56:13.537684    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:56:13.537694    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:56:13.555746    9750 logs.go:123] Gathering logs for coredns [332d222c0bbb] ...
	I0318 13:56:13.555756    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332d222c0bbb"
	I0318 13:56:13.567717    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:56:13.567727    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:56:13.579688    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:56:13.579698    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:56:13.615804    9750 logs.go:123] Gathering logs for coredns [1b02f9dbe0ea] ...
	I0318 13:56:13.615814    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b02f9dbe0ea"
	I0318 13:56:16.130666    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:56:21.133351    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:56:21.133438    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:56:21.152775    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:56:21.152847    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:56:21.172862    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:56:21.172915    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:56:21.188950    9750 logs.go:276] 4 containers: [332d222c0bbb 1b02f9dbe0ea 5ef0c31bcb0a 27e00e6f1725]
	I0318 13:56:21.189025    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:56:21.212053    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:56:21.212111    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:56:21.224287    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:56:21.224346    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:56:21.237260    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:56:21.237315    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:56:21.250049    9750 logs.go:276] 0 containers: []
	W0318 13:56:21.250063    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:56:21.250125    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:56:21.262907    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:56:21.262922    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:56:21.262926    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:56:21.279267    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:56:21.279280    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:56:21.283771    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:56:21.283786    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:56:21.306449    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:56:21.306459    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:56:21.345866    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:56:21.345884    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:56:21.385575    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:56:21.385587    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:56:21.409720    9750 logs.go:123] Gathering logs for coredns [332d222c0bbb] ...
	I0318 13:56:21.409741    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332d222c0bbb"
	I0318 13:56:21.423180    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:56:21.423191    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:56:21.439558    9750 logs.go:123] Gathering logs for coredns [1b02f9dbe0ea] ...
	I0318 13:56:21.439569    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b02f9dbe0ea"
	I0318 13:56:21.460709    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:56:21.460721    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:56:21.474859    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:56:21.474870    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:56:21.488659    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:56:21.488670    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:56:21.509366    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:56:21.509375    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:56:21.522122    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:56:21.522134    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:56:21.538093    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:56:21.538104    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:56:24.056592    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:56:29.059126    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:56:29.059576    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:56:29.100458    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:56:29.100593    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:56:29.121931    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:56:29.122045    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:56:29.138276    9750 logs.go:276] 4 containers: [332d222c0bbb 1b02f9dbe0ea 5ef0c31bcb0a 27e00e6f1725]
	I0318 13:56:29.138350    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:56:29.152611    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:56:29.152680    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:56:29.163403    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:56:29.163470    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:56:29.174511    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:56:29.174566    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:56:29.185342    9750 logs.go:276] 0 containers: []
	W0318 13:56:29.185356    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:56:29.185411    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:56:29.196227    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:56:29.196244    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:56:29.196249    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:56:29.219764    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:56:29.219776    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:56:29.237963    9750 logs.go:123] Gathering logs for coredns [1b02f9dbe0ea] ...
	I0318 13:56:29.237972    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b02f9dbe0ea"
	I0318 13:56:29.249523    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:56:29.249534    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:56:29.263828    9750 logs.go:123] Gathering logs for coredns [332d222c0bbb] ...
	I0318 13:56:29.263839    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332d222c0bbb"
	I0318 13:56:29.275973    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:56:29.275983    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:56:29.291858    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:56:29.291869    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:56:29.314689    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:56:29.314699    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:56:29.326198    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:56:29.326209    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:56:29.362828    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:56:29.362840    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:56:29.405572    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:56:29.405586    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:56:29.417136    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:56:29.417151    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:56:29.428374    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:56:29.428385    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:56:29.432408    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:56:29.432413    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:56:29.443816    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:56:29.443829    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:56:31.965790    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:56:36.968528    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:56:36.968962    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:56:37.008989    9750 logs.go:276] 1 containers: [f4d422781b66]
	I0318 13:56:37.009110    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:56:37.030114    9750 logs.go:276] 1 containers: [05269ef81ef0]
	I0318 13:56:37.030221    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:56:37.046518    9750 logs.go:276] 4 containers: [332d222c0bbb 1b02f9dbe0ea 5ef0c31bcb0a 27e00e6f1725]
	I0318 13:56:37.046587    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:56:37.058887    9750 logs.go:276] 1 containers: [0002ddb3bb0b]
	I0318 13:56:37.058945    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:56:37.069848    9750 logs.go:276] 1 containers: [7f93d9e1ed7a]
	I0318 13:56:37.069921    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:56:37.080339    9750 logs.go:276] 1 containers: [f535ec7768a5]
	I0318 13:56:37.080397    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:56:37.090329    9750 logs.go:276] 0 containers: []
	W0318 13:56:37.090340    9750 logs.go:278] No container was found matching "kindnet"
	I0318 13:56:37.090387    9750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 13:56:37.102035    9750 logs.go:276] 1 containers: [2cf8842023ea]
	I0318 13:56:37.102053    9750 logs.go:123] Gathering logs for kube-apiserver [f4d422781b66] ...
	I0318 13:56:37.102059    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4d422781b66"
	I0318 13:56:37.116738    9750 logs.go:123] Gathering logs for etcd [05269ef81ef0] ...
	I0318 13:56:37.116750    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05269ef81ef0"
	I0318 13:56:37.135578    9750 logs.go:123] Gathering logs for kubelet ...
	I0318 13:56:37.135590    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:56:37.171660    9750 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:56:37.171669    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:56:37.205467    9750 logs.go:123] Gathering logs for coredns [332d222c0bbb] ...
	I0318 13:56:37.205478    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332d222c0bbb"
	I0318 13:56:37.224723    9750 logs.go:123] Gathering logs for coredns [1b02f9dbe0ea] ...
	I0318 13:56:37.224733    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b02f9dbe0ea"
	I0318 13:56:37.236378    9750 logs.go:123] Gathering logs for kube-scheduler [0002ddb3bb0b] ...
	I0318 13:56:37.236389    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0002ddb3bb0b"
	I0318 13:56:37.250723    9750 logs.go:123] Gathering logs for kube-proxy [7f93d9e1ed7a] ...
	I0318 13:56:37.250733    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f93d9e1ed7a"
	I0318 13:56:37.262349    9750 logs.go:123] Gathering logs for dmesg ...
	I0318 13:56:37.262358    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:56:37.267167    9750 logs.go:123] Gathering logs for container status ...
	I0318 13:56:37.267173    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:56:37.279100    9750 logs.go:123] Gathering logs for coredns [5ef0c31bcb0a] ...
	I0318 13:56:37.279109    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef0c31bcb0a"
	I0318 13:56:37.294759    9750 logs.go:123] Gathering logs for coredns [27e00e6f1725] ...
	I0318 13:56:37.294767    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e00e6f1725"
	I0318 13:56:37.306346    9750 logs.go:123] Gathering logs for kube-controller-manager [f535ec7768a5] ...
	I0318 13:56:37.306356    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f535ec7768a5"
	I0318 13:56:37.323227    9750 logs.go:123] Gathering logs for storage-provisioner [2cf8842023ea] ...
	I0318 13:56:37.323235    9750 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf8842023ea"
	I0318 13:56:37.340443    9750 logs.go:123] Gathering logs for Docker ...
	I0318 13:56:37.340454    9750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:56:39.867019    9750 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 13:56:44.869360    9750 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 13:56:44.875598    9750 out.go:177] 
	W0318 13:56:44.881535    9750 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0318 13:56:44.881552    9750 out.go:239] * 
	W0318 13:56:44.882014    9750 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:56:44.900569    9750 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-813000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (579.45s)
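
Editor's note: the stderr capture above shows the shape of this failure. Roughly every 2.5 seconds minikube probes https://10.0.2.15:8443/healthz, each probe dies after exactly 5 seconds with "Client.Timeout exceeded while awaiting headers", and between probes it re-enumerates the control-plane containers and re-gathers their logs, until the 6-minute node-wait budget is exhausted. The sketch below is a minimal Go reduction of that polling pattern, written for illustration only; it is not minikube's api_server.go, and the skip-verify TLS config is an assumption made so the probe runs outside the cluster's trust store:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // A 5s client timeout reproduces the "Client.Timeout exceeded
            // while awaiting headers" errors seen in the log above.
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(6 * time.Minute) // the "wait 6m0s for node" budget
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err)
                time.Sleep(2 * time.Second) // rough pause before the next attempt
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
        }
        fmt.Println("apiserver healthz never reported healthy: context deadline exceeded")
    }
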

TestPause/serial/Start (9.82s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-636000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-636000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.7635985s)

-- stdout --
	* [pause-636000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-636000" primary control-plane node in "pause-636000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-636000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-636000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-636000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-636000 -n pause-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-636000 -n pause-636000: exit status 7 (56.247583ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.82s)
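
Editor's note: this failure, and every NoKubernetes and network-plugin failure that follows, is the same host-side condition: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the VM never starts and Kubernetes is never involved. A quick way to confirm the daemon is down is to dial the socket directly. The Go sketch below is illustrative only; the socket path is taken from the log, everything else is assumption:

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Dial the unix socket the qemu2 driver connects to. A "connection
        // refused" error here reproduces the ERROR lines in the output above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Fprintln(os.Stderr, "socket_vmnet unreachable:", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }
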

TestNoKubernetes/serial/StartWithK8s (9.98s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-170000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-170000 --driver=qemu2 : exit status 80 (9.912161291s)

-- stdout --
	* [NoKubernetes-170000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-170000" primary control-plane node in "NoKubernetes-170000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-170000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-170000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-170000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-170000 -n NoKubernetes-170000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-170000 -n NoKubernetes-170000: exit status 7 (67.013083ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-170000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.98s)
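
Editor's note: the "(dbg) Run" / "(dbg) Non-zero exit" pairs throughout these sections are the test helpers wrapping each minikube invocation and recording its exit status and wall time. A stripped-down, hypothetical reduction of that wrapper pattern follows; the real logic lives in helpers_test.go and the *_test.go files, so names and structure here are assumptions:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "NoKubernetes-170000", "--driver=qemu2")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        // An *exec.ExitError carries the child's exit code, which is how the
        // "(dbg) Non-zero exit: ... exit status 80" lines are produced.
        if exitErr, ok := err.(*exec.ExitError); ok {
            fmt.Printf("(dbg) Non-zero exit: exit status %d (%s)\n", exitErr.ExitCode(), time.Since(start))
        } else if err != nil {
            fmt.Println("run error:", err)
        }
    }
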

TestNoKubernetes/serial/StartWithStopK8s (5.87s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-170000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-170000 --no-kubernetes --driver=qemu2 : exit status 80 (5.831644667s)

-- stdout --
	* [NoKubernetes-170000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-170000
	* Restarting existing qemu2 VM for "NoKubernetes-170000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-170000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-170000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-170000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-170000 -n NoKubernetes-170000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-170000 -n NoKubernetes-170000: exit status 7 (34.510083ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-170000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.87s)

TestNoKubernetes/serial/Start (5.92s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-170000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-170000 --no-kubernetes --driver=qemu2 : exit status 80 (5.849319917s)

-- stdout --
	* [NoKubernetes-170000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-170000
	* Restarting existing qemu2 VM for "NoKubernetes-170000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-170000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-170000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-170000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-170000 -n NoKubernetes-170000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-170000 -n NoKubernetes-170000: exit status 7 (66.147791ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-170000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.92s)

TestNoKubernetes/serial/StartNoArgs (5.94s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-170000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-170000 --driver=qemu2 : exit status 80 (5.867913208s)

-- stdout --
	* [NoKubernetes-170000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-170000
	* Restarting existing qemu2 VM for "NoKubernetes-170000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-170000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-170000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-170000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-170000 -n NoKubernetes-170000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-170000 -n NoKubernetes-170000: exit status 7 (69.778666ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-170000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.94s)

TestNetworkPlugins/group/auto/Start (9.78s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-099000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-099000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.776792458s)

-- stdout --
	* [auto-099000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-099000" primary control-plane node in "auto-099000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-099000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0318 13:55:33.496794   10013 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:55:33.496918   10013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:55:33.496921   10013 out.go:304] Setting ErrFile to fd 2...
	I0318 13:55:33.496924   10013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:55:33.497046   10013 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:55:33.498161   10013 out.go:298] Setting JSON to false
	I0318 13:55:33.514454   10013 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6905,"bootTime":1710788428,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:55:33.514515   10013 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:55:33.520880   10013 out.go:177] * [auto-099000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:55:33.528881   10013 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:55:33.528962   10013 notify.go:220] Checking for updates...
	I0318 13:55:33.535855   10013 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:55:33.538851   10013 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:55:33.541861   10013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:55:33.544846   10013 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:55:33.547866   10013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:55:33.551265   10013 config.go:182] Loaded profile config "multinode-685000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:55:33.551334   10013 config.go:182] Loaded profile config "stopped-upgrade-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 13:55:33.551378   10013 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:55:33.554770   10013 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 13:55:33.561864   10013 start.go:297] selected driver: qemu2
	I0318 13:55:33.561869   10013 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:55:33.561875   10013 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:55:33.564161   10013 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:55:33.565795   10013 out.go:177] * Automatically selected the socket_vmnet network
	I0318 13:55:33.568968   10013 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:55:33.569023   10013 cni.go:84] Creating CNI manager for ""
	I0318 13:55:33.569034   10013 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:55:33.569038   10013 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 13:55:33.569065   10013 start.go:340] cluster config:
	{Name:auto-099000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:55:33.573410   10013 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:55:33.580816   10013 out.go:177] * Starting "auto-099000" primary control-plane node in "auto-099000" cluster
	I0318 13:55:33.584832   10013 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:55:33.584845   10013 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:55:33.584851   10013 cache.go:56] Caching tarball of preloaded images
	I0318 13:55:33.584900   10013 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:55:33.584906   10013 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:55:33.584963   10013 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/auto-099000/config.json ...
	I0318 13:55:33.584980   10013 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/auto-099000/config.json: {Name:mkae9eac621054500c4ad891c61fd173456bad0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:33.585197   10013 start.go:360] acquireMachinesLock for auto-099000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:55:33.585228   10013 start.go:364] duration metric: took 25.792µs to acquireMachinesLock for "auto-099000"
	I0318 13:55:33.585241   10013 start.go:93] Provisioning new machine with config: &{Name:auto-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:55:33.585268   10013 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:55:33.593838   10013 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 13:55:33.609835   10013 start.go:159] libmachine.API.Create for "auto-099000" (driver="qemu2")
	I0318 13:55:33.609863   10013 client.go:168] LocalClient.Create starting
	I0318 13:55:33.609929   10013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:55:33.609965   10013 main.go:141] libmachine: Decoding PEM data...
	I0318 13:55:33.609975   10013 main.go:141] libmachine: Parsing certificate...
	I0318 13:55:33.610021   10013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:55:33.610042   10013 main.go:141] libmachine: Decoding PEM data...
	I0318 13:55:33.610051   10013 main.go:141] libmachine: Parsing certificate...
	I0318 13:55:33.610419   10013 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:55:33.754365   10013 main.go:141] libmachine: Creating SSH key...
	I0318 13:55:33.834224   10013 main.go:141] libmachine: Creating Disk image...
	I0318 13:55:33.834235   10013 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:55:33.834458   10013 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/auto-099000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/auto-099000/disk.qcow2
	I0318 13:55:33.848017   10013 main.go:141] libmachine: STDOUT: 
	I0318 13:55:33.848043   10013 main.go:141] libmachine: STDERR: 
	I0318 13:55:33.848096   10013 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/auto-099000/disk.qcow2 +20000M
	I0318 13:55:33.860740   10013 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:55:33.860762   10013 main.go:141] libmachine: STDERR: 
	I0318 13:55:33.860778   10013 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/auto-099000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/auto-099000/disk.qcow2
	I0318 13:55:33.860785   10013 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:55:33.860826   10013 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/auto-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/auto-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/auto-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:27:33:58:b2:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/auto-099000/disk.qcow2
	I0318 13:55:33.863073   10013 main.go:141] libmachine: STDOUT: 
	I0318 13:55:33.863092   10013 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:55:33.863116   10013 client.go:171] duration metric: took 253.248458ms to LocalClient.Create
	I0318 13:55:35.865411   10013 start.go:128] duration metric: took 2.2801075s to createHost
	I0318 13:55:35.865507   10013 start.go:83] releasing machines lock for "auto-099000", held for 2.280279958s
	W0318 13:55:35.865642   10013 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:55:35.876884   10013 out.go:177] * Deleting "auto-099000" in qemu2 ...
	W0318 13:55:35.907723   10013 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:55:35.907768   10013 start.go:728] Will try again in 5 seconds ...
	I0318 13:55:40.910146   10013 start.go:360] acquireMachinesLock for auto-099000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:55:40.910601   10013 start.go:364] duration metric: took 295.542µs to acquireMachinesLock for "auto-099000"
	I0318 13:55:40.910840   10013 start.go:93] Provisioning new machine with config: &{Name:auto-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:55:40.911004   10013 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:55:40.915616   10013 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 13:55:40.958983   10013 start.go:159] libmachine.API.Create for "auto-099000" (driver="qemu2")
	I0318 13:55:40.959037   10013 client.go:168] LocalClient.Create starting
	I0318 13:55:40.959149   10013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:55:40.959216   10013 main.go:141] libmachine: Decoding PEM data...
	I0318 13:55:40.959233   10013 main.go:141] libmachine: Parsing certificate...
	I0318 13:55:40.959294   10013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:55:40.959339   10013 main.go:141] libmachine: Decoding PEM data...
	I0318 13:55:40.959348   10013 main.go:141] libmachine: Parsing certificate...
	I0318 13:55:40.959906   10013 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:55:41.110809   10013 main.go:141] libmachine: Creating SSH key...
	I0318 13:55:41.173088   10013 main.go:141] libmachine: Creating Disk image...
	I0318 13:55:41.173097   10013 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:55:41.173285   10013 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/auto-099000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/auto-099000/disk.qcow2
	I0318 13:55:41.186086   10013 main.go:141] libmachine: STDOUT: 
	I0318 13:55:41.186112   10013 main.go:141] libmachine: STDERR: 
	I0318 13:55:41.186168   10013 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/auto-099000/disk.qcow2 +20000M
	I0318 13:55:41.196791   10013 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:55:41.196809   10013 main.go:141] libmachine: STDERR: 
	I0318 13:55:41.196823   10013 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/auto-099000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/auto-099000/disk.qcow2
	I0318 13:55:41.196828   10013 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:55:41.196870   10013 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/auto-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/auto-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/auto-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:e9:2f:35:8e:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/auto-099000/disk.qcow2
	I0318 13:55:41.198611   10013 main.go:141] libmachine: STDOUT: 
	I0318 13:55:41.198636   10013 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:55:41.198649   10013 client.go:171] duration metric: took 239.608791ms to LocalClient.Create
	I0318 13:55:43.200835   10013 start.go:128] duration metric: took 2.289815792s to createHost
	I0318 13:55:43.200907   10013 start.go:83] releasing machines lock for "auto-099000", held for 2.29015125s
	W0318 13:55:43.201289   10013 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-099000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-099000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:55:43.213927   10013 out.go:177] 
	W0318 13:55:43.218245   10013 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:55:43.218320   10013 out.go:239] * 
	* 
	W0318 13:55:43.220969   10013 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:55:43.227911   10013 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.78s)
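
Both launch attempts above die at the same point: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its vmnet file descriptor and the test fails in under ten seconds. In other words, no socket_vmnet daemon was accepting connections on this agent when the run started. A minimal Go sketch of that connectivity probe, illustrative only and assuming the socket path shown in the log:

	// probe.go - checks whether a socket_vmnet daemon is accepting
	// connections on the unix socket path seen in the failures above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the log above

		// An absent or wedged daemon surfaces here as "connection refused",
		// matching the STDERR that libmachine captured for each VM start.
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On this agent the probe would presumably exit non-zero, consistent with the fast exit status 80 (GUEST_PROVISION) rather than a slow provisioning timeout; restarting the socket_vmnet daemon, however it is managed on the host, is the obvious first step.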

TestNetworkPlugins/group/kindnet/Start (9.81s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-099000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-099000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.807103667s)

-- stdout --
	* [kindnet-099000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-099000" primary control-plane node in "kindnet-099000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-099000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:55:45.556804   10125 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:55:45.556918   10125 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:55:45.556921   10125 out.go:304] Setting ErrFile to fd 2...
	I0318 13:55:45.556923   10125 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:55:45.557036   10125 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:55:45.558057   10125 out.go:298] Setting JSON to false
	I0318 13:55:45.574451   10125 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6917,"bootTime":1710788428,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:55:45.574525   10125 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:55:45.579665   10125 out.go:177] * [kindnet-099000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:55:45.586549   10125 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:55:45.586609   10125 notify.go:220] Checking for updates...
	I0318 13:55:45.593503   10125 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:55:45.596500   10125 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:55:45.599549   10125 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:55:45.602484   10125 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:55:45.605581   10125 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:55:45.608903   10125 config.go:182] Loaded profile config "multinode-685000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:55:45.608971   10125 config.go:182] Loaded profile config "stopped-upgrade-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 13:55:45.609023   10125 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:55:45.613510   10125 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 13:55:45.620536   10125 start.go:297] selected driver: qemu2
	I0318 13:55:45.620543   10125 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:55:45.620549   10125 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:55:45.622817   10125 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:55:45.626490   10125 out.go:177] * Automatically selected the socket_vmnet network
	I0318 13:55:45.629617   10125 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:55:45.629670   10125 cni.go:84] Creating CNI manager for "kindnet"
	I0318 13:55:45.629680   10125 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 13:55:45.629709   10125 start.go:340] cluster config:
	{Name:kindnet-099000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:55:45.634221   10125 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:55:45.641527   10125 out.go:177] * Starting "kindnet-099000" primary control-plane node in "kindnet-099000" cluster
	I0318 13:55:45.645505   10125 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:55:45.645521   10125 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:55:45.645528   10125 cache.go:56] Caching tarball of preloaded images
	I0318 13:55:45.645576   10125 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:55:45.645582   10125 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:55:45.645633   10125 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/kindnet-099000/config.json ...
	I0318 13:55:45.645645   10125 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/kindnet-099000/config.json: {Name:mk79d561f4c727edac18a2edce0051e81ce3125c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:45.645844   10125 start.go:360] acquireMachinesLock for kindnet-099000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:55:45.645875   10125 start.go:364] duration metric: took 25.584µs to acquireMachinesLock for "kindnet-099000"
	I0318 13:55:45.645888   10125 start.go:93] Provisioning new machine with config: &{Name:kindnet-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:55:45.645921   10125 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:55:45.654525   10125 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 13:55:45.668543   10125 start.go:159] libmachine.API.Create for "kindnet-099000" (driver="qemu2")
	I0318 13:55:45.668571   10125 client.go:168] LocalClient.Create starting
	I0318 13:55:45.668627   10125 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:55:45.668667   10125 main.go:141] libmachine: Decoding PEM data...
	I0318 13:55:45.668681   10125 main.go:141] libmachine: Parsing certificate...
	I0318 13:55:45.668728   10125 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:55:45.668749   10125 main.go:141] libmachine: Decoding PEM data...
	I0318 13:55:45.668754   10125 main.go:141] libmachine: Parsing certificate...
	I0318 13:55:45.669086   10125 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:55:45.811442   10125 main.go:141] libmachine: Creating SSH key...
	I0318 13:55:45.929700   10125 main.go:141] libmachine: Creating Disk image...
	I0318 13:55:45.929710   10125 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:55:45.929908   10125 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kindnet-099000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kindnet-099000/disk.qcow2
	I0318 13:55:45.942768   10125 main.go:141] libmachine: STDOUT: 
	I0318 13:55:45.942794   10125 main.go:141] libmachine: STDERR: 
	I0318 13:55:45.942856   10125 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kindnet-099000/disk.qcow2 +20000M
	I0318 13:55:45.954059   10125 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:55:45.954081   10125 main.go:141] libmachine: STDERR: 
	I0318 13:55:45.954097   10125 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kindnet-099000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kindnet-099000/disk.qcow2
	I0318 13:55:45.954102   10125 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:55:45.954131   10125 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kindnet-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kindnet-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kindnet-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:1a:b8:1b:1c:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kindnet-099000/disk.qcow2
	I0318 13:55:45.955977   10125 main.go:141] libmachine: STDOUT: 
	I0318 13:55:45.955991   10125 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:55:45.956014   10125 client.go:171] duration metric: took 287.439209ms to LocalClient.Create
	I0318 13:55:47.958306   10125 start.go:128] duration metric: took 2.312367417s to createHost
	I0318 13:55:47.958408   10125 start.go:83] releasing machines lock for "kindnet-099000", held for 2.312535917s
	W0318 13:55:47.958474   10125 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:55:47.970474   10125 out.go:177] * Deleting "kindnet-099000" in qemu2 ...
	W0318 13:55:47.997182   10125 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:55:47.997218   10125 start.go:728] Will try again in 5 seconds ...
	I0318 13:55:52.999449   10125 start.go:360] acquireMachinesLock for kindnet-099000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:55:52.999964   10125 start.go:364] duration metric: took 417.917µs to acquireMachinesLock for "kindnet-099000"
	I0318 13:55:53.000033   10125 start.go:93] Provisioning new machine with config: &{Name:kindnet-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:55:53.000289   10125 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:55:53.008891   10125 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 13:55:53.050299   10125 start.go:159] libmachine.API.Create for "kindnet-099000" (driver="qemu2")
	I0318 13:55:53.050351   10125 client.go:168] LocalClient.Create starting
	I0318 13:55:53.050455   10125 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:55:53.050519   10125 main.go:141] libmachine: Decoding PEM data...
	I0318 13:55:53.050533   10125 main.go:141] libmachine: Parsing certificate...
	I0318 13:55:53.050587   10125 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:55:53.050623   10125 main.go:141] libmachine: Decoding PEM data...
	I0318 13:55:53.050635   10125 main.go:141] libmachine: Parsing certificate...
	I0318 13:55:53.051124   10125 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:55:53.202416   10125 main.go:141] libmachine: Creating SSH key...
	I0318 13:55:53.259517   10125 main.go:141] libmachine: Creating Disk image...
	I0318 13:55:53.259522   10125 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:55:53.259708   10125 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kindnet-099000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kindnet-099000/disk.qcow2
	I0318 13:55:53.272628   10125 main.go:141] libmachine: STDOUT: 
	I0318 13:55:53.272651   10125 main.go:141] libmachine: STDERR: 
	I0318 13:55:53.272707   10125 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kindnet-099000/disk.qcow2 +20000M
	I0318 13:55:53.288430   10125 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:55:53.288453   10125 main.go:141] libmachine: STDERR: 
	I0318 13:55:53.288467   10125 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kindnet-099000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kindnet-099000/disk.qcow2
	I0318 13:55:53.288472   10125 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:55:53.288503   10125 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kindnet-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kindnet-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kindnet-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:98:e7:dd:69:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kindnet-099000/disk.qcow2
	I0318 13:55:53.290301   10125 main.go:141] libmachine: STDOUT: 
	I0318 13:55:53.290316   10125 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:55:53.290328   10125 client.go:171] duration metric: took 239.972833ms to LocalClient.Create
	I0318 13:55:55.292513   10125 start.go:128] duration metric: took 2.292200916s to createHost
	I0318 13:55:55.292614   10125 start.go:83] releasing machines lock for "kindnet-099000", held for 2.292638042s
	W0318 13:55:55.293008   10125 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-099000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-099000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:55:55.305804   10125 out.go:177] 
	W0318 13:55:55.308754   10125 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:55:55.308773   10125 out.go:239] * 
	* 
	W0318 13:55:55.310837   10125 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:55:55.318744   10125 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.81s)
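
The kindnet run reproduces the auto failure line for line, about twelve seconds later: create the host, hit the refused socket, delete the profile, retry once after 5 seconds, exit 80. Since the outage is in the shared socket_vmnet daemon rather than in any particular CNI, every plugin in the group burns roughly ten seconds failing the same way (calico follows below). A hypothetical preflight guard sketched in Go; requireSocketVMnet is not part of minikube's net_test.go, only an illustration of skipping the whole group up front:

	// Hypothetical preflight helper for tests that depend on the
	// socket_vmnet daemon; not taken from minikube's test suite.
	package nettest

	import (
		"net"
		"testing"
		"time"
	)

	// requireSocketVMnet skips the calling test when nothing is listening
	// on the socket, instead of letting each CNI variant fail separately.
	func requireSocketVMnet(t *testing.T) {
		t.Helper()
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
		if err != nil {
			t.Skipf("socket_vmnet not reachable, skipping: %v", err)
		}
		conn.Close()
	}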

TestNetworkPlugins/group/calico/Start (9.93s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-099000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-099000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.928394625s)

-- stdout --
	* [calico-099000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-099000" primary control-plane node in "calico-099000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-099000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:55:57.739445   10245 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:55:57.739578   10245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:55:57.739582   10245 out.go:304] Setting ErrFile to fd 2...
	I0318 13:55:57.739584   10245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:55:57.739718   10245 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:55:57.740889   10245 out.go:298] Setting JSON to false
	I0318 13:55:57.759159   10245 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6929,"bootTime":1710788428,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:55:57.759232   10245 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:55:57.763210   10245 out.go:177] * [calico-099000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:55:57.770138   10245 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:55:57.770278   10245 notify.go:220] Checking for updates...
	I0318 13:55:57.773164   10245 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:55:57.777123   10245 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:55:57.780118   10245 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:55:57.784117   10245 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:55:57.787062   10245 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:55:57.790522   10245 config.go:182] Loaded profile config "multinode-685000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:55:57.790584   10245 config.go:182] Loaded profile config "stopped-upgrade-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 13:55:57.790639   10245 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:55:57.795111   10245 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 13:55:57.802113   10245 start.go:297] selected driver: qemu2
	I0318 13:55:57.802123   10245 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:55:57.802129   10245 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:55:57.804698   10245 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:55:57.808047   10245 out.go:177] * Automatically selected the socket_vmnet network
	I0318 13:55:57.811193   10245 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:55:57.811239   10245 cni.go:84] Creating CNI manager for "calico"
	I0318 13:55:57.811252   10245 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0318 13:55:57.811288   10245 start.go:340] cluster config:
	{Name:calico-099000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:55:57.816084   10245 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:55:57.823091   10245 out.go:177] * Starting "calico-099000" primary control-plane node in "calico-099000" cluster
	I0318 13:55:57.827136   10245 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:55:57.827174   10245 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:55:57.827185   10245 cache.go:56] Caching tarball of preloaded images
	I0318 13:55:57.827268   10245 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:55:57.827277   10245 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:55:57.827339   10245 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/calico-099000/config.json ...
	I0318 13:55:57.827351   10245 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/calico-099000/config.json: {Name:mk509d831543f243a620f6e58e790ad4e15f2b50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:57.827653   10245 start.go:360] acquireMachinesLock for calico-099000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:55:57.827681   10245 start.go:364] duration metric: took 23.25µs to acquireMachinesLock for "calico-099000"
	I0318 13:55:57.827694   10245 start.go:93] Provisioning new machine with config: &{Name:calico-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:55:57.827728   10245 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:55:57.831082   10245 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 13:55:57.846171   10245 start.go:159] libmachine.API.Create for "calico-099000" (driver="qemu2")
	I0318 13:55:57.846194   10245 client.go:168] LocalClient.Create starting
	I0318 13:55:57.846266   10245 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:55:57.846296   10245 main.go:141] libmachine: Decoding PEM data...
	I0318 13:55:57.846307   10245 main.go:141] libmachine: Parsing certificate...
	I0318 13:55:57.846355   10245 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:55:57.846376   10245 main.go:141] libmachine: Decoding PEM data...
	I0318 13:55:57.846387   10245 main.go:141] libmachine: Parsing certificate...
	I0318 13:55:57.846750   10245 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:55:57.988098   10245 main.go:141] libmachine: Creating SSH key...
	I0318 13:55:58.158236   10245 main.go:141] libmachine: Creating Disk image...
	I0318 13:55:58.158252   10245 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:55:58.158442   10245 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/calico-099000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/calico-099000/disk.qcow2
	I0318 13:55:58.171024   10245 main.go:141] libmachine: STDOUT: 
	I0318 13:55:58.171043   10245 main.go:141] libmachine: STDERR: 
	I0318 13:55:58.171095   10245 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/calico-099000/disk.qcow2 +20000M
	I0318 13:55:58.182164   10245 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:55:58.182197   10245 main.go:141] libmachine: STDERR: 
	I0318 13:55:58.182216   10245 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/calico-099000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/calico-099000/disk.qcow2
	I0318 13:55:58.182220   10245 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:55:58.182252   10245 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/calico-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/calico-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/calico-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:7f:22:cf:ad:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/calico-099000/disk.qcow2
	I0318 13:55:58.184306   10245 main.go:141] libmachine: STDOUT: 
	I0318 13:55:58.184326   10245 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:55:58.184344   10245 client.go:171] duration metric: took 338.146833ms to LocalClient.Create
	I0318 13:56:00.185920   10245 start.go:128] duration metric: took 2.358179958s to createHost
	I0318 13:56:00.186015   10245 start.go:83] releasing machines lock for "calico-099000", held for 2.358336042s
	W0318 13:56:00.186116   10245 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:56:00.199243   10245 out.go:177] * Deleting "calico-099000" in qemu2 ...
	W0318 13:56:00.219472   10245 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:56:00.219501   10245 start.go:728] Will try again in 5 seconds ...
	I0318 13:56:05.221914   10245 start.go:360] acquireMachinesLock for calico-099000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:56:05.222349   10245 start.go:364] duration metric: took 344.459µs to acquireMachinesLock for "calico-099000"
	I0318 13:56:05.222498   10245 start.go:93] Provisioning new machine with config: &{Name:calico-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:56:05.222734   10245 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:56:05.227940   10245 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 13:56:05.274453   10245 start.go:159] libmachine.API.Create for "calico-099000" (driver="qemu2")
	I0318 13:56:05.274511   10245 client.go:168] LocalClient.Create starting
	I0318 13:56:05.274621   10245 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:56:05.274686   10245 main.go:141] libmachine: Decoding PEM data...
	I0318 13:56:05.274703   10245 main.go:141] libmachine: Parsing certificate...
	I0318 13:56:05.274766   10245 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:56:05.274809   10245 main.go:141] libmachine: Decoding PEM data...
	I0318 13:56:05.274821   10245 main.go:141] libmachine: Parsing certificate...
	I0318 13:56:05.275357   10245 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:56:05.427112   10245 main.go:141] libmachine: Creating SSH key...
	I0318 13:56:05.557834   10245 main.go:141] libmachine: Creating Disk image...
	I0318 13:56:05.557845   10245 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:56:05.558076   10245 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/calico-099000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/calico-099000/disk.qcow2
	I0318 13:56:05.571797   10245 main.go:141] libmachine: STDOUT: 
	I0318 13:56:05.571830   10245 main.go:141] libmachine: STDERR: 
	I0318 13:56:05.571897   10245 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/calico-099000/disk.qcow2 +20000M
	I0318 13:56:05.584472   10245 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:56:05.584504   10245 main.go:141] libmachine: STDERR: 
	I0318 13:56:05.584520   10245 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/calico-099000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/calico-099000/disk.qcow2
	I0318 13:56:05.584526   10245 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:56:05.584554   10245 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/calico-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/calico-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/calico-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:f7:86:c9:f6:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/calico-099000/disk.qcow2
	I0318 13:56:05.586715   10245 main.go:141] libmachine: STDOUT: 
	I0318 13:56:05.586732   10245 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:56:05.586744   10245 client.go:171] duration metric: took 312.228666ms to LocalClient.Create
	I0318 13:56:07.588939   10245 start.go:128] duration metric: took 2.366176834s to createHost
	I0318 13:56:07.589009   10245 start.go:83] releasing machines lock for "calico-099000", held for 2.366648208s
	W0318 13:56:07.589497   10245 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-099000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-099000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:56:07.602016   10245 out.go:177] 
	W0318 13:56:07.606121   10245 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:56:07.606172   10245 out.go:239] * 
	* 
	W0318 13:56:07.608549   10245 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:56:07.620913   10245 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.93s)
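Note that this failure is not Calico-specific: the run never reaches CNI setup, because socket_vmnet_client cannot reach the socket_vmnet daemon's control socket, so LocalClient.Create fails before the VM boots. The following is a minimal standalone Go sketch (hypothetical diagnostic, not part of the minikube test suite) that probes the same unix socket the logs show failing; the path is the SocketVMnetPath from the cluster config above.

// probe_socket_vmnet.go - hypothetical diagnostic, not part of minikube.
// A "connection refused" here reproduces the error in the trace above and
// means the socket_vmnet daemon is not running (or listens elsewhere).
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the config above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
}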

TestNetworkPlugins/group/custom-flannel/Start (9.76s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-099000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-099000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.755239792s)

-- stdout --
	* [custom-flannel-099000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-099000" primary control-plane node in "custom-flannel-099000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-099000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:56:10.142472   10363 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:56:10.142600   10363 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:56:10.142603   10363 out.go:304] Setting ErrFile to fd 2...
	I0318 13:56:10.142606   10363 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:56:10.142730   10363 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:56:10.143803   10363 out.go:298] Setting JSON to false
	I0318 13:56:10.159943   10363 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6942,"bootTime":1710788428,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:56:10.160011   10363 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:56:10.165938   10363 out.go:177] * [custom-flannel-099000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:56:10.172840   10363 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:56:10.176902   10363 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:56:10.172903   10363 notify.go:220] Checking for updates...
	I0318 13:56:10.180799   10363 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:56:10.183805   10363 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:56:10.187822   10363 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:56:10.190787   10363 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:56:10.194132   10363 config.go:182] Loaded profile config "multinode-685000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:56:10.194197   10363 config.go:182] Loaded profile config "stopped-upgrade-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 13:56:10.194255   10363 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:56:10.198817   10363 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 13:56:10.205842   10363 start.go:297] selected driver: qemu2
	I0318 13:56:10.205847   10363 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:56:10.205852   10363 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:56:10.208072   10363 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:56:10.211807   10363 out.go:177] * Automatically selected the socket_vmnet network
	I0318 13:56:10.214895   10363 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:56:10.214910   10363 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0318 13:56:10.214917   10363 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0318 13:56:10.214946   10363 start.go:340] cluster config:
	{Name:custom-flannel-099000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:56:10.219394   10363 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:56:10.226745   10363 out.go:177] * Starting "custom-flannel-099000" primary control-plane node in "custom-flannel-099000" cluster
	I0318 13:56:10.230791   10363 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:56:10.230807   10363 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:56:10.230814   10363 cache.go:56] Caching tarball of preloaded images
	I0318 13:56:10.230870   10363 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:56:10.230882   10363 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:56:10.230942   10363 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/custom-flannel-099000/config.json ...
	I0318 13:56:10.230952   10363 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/custom-flannel-099000/config.json: {Name:mkea2fce5ae0736c71561311e77556b8e504b9fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:56:10.231157   10363 start.go:360] acquireMachinesLock for custom-flannel-099000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:56:10.231192   10363 start.go:364] duration metric: took 25.625µs to acquireMachinesLock for "custom-flannel-099000"
	I0318 13:56:10.231204   10363 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:56:10.231232   10363 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:56:10.238798   10363 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 13:56:10.255780   10363 start.go:159] libmachine.API.Create for "custom-flannel-099000" (driver="qemu2")
	I0318 13:56:10.255810   10363 client.go:168] LocalClient.Create starting
	I0318 13:56:10.255890   10363 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:56:10.255920   10363 main.go:141] libmachine: Decoding PEM data...
	I0318 13:56:10.255928   10363 main.go:141] libmachine: Parsing certificate...
	I0318 13:56:10.255976   10363 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:56:10.255997   10363 main.go:141] libmachine: Decoding PEM data...
	I0318 13:56:10.256002   10363 main.go:141] libmachine: Parsing certificate...
	I0318 13:56:10.256359   10363 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:56:10.397014   10363 main.go:141] libmachine: Creating SSH key...
	I0318 13:56:10.447604   10363 main.go:141] libmachine: Creating Disk image...
	I0318 13:56:10.447610   10363 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:56:10.447786   10363 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/custom-flannel-099000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/custom-flannel-099000/disk.qcow2
	I0318 13:56:10.459986   10363 main.go:141] libmachine: STDOUT: 
	I0318 13:56:10.460010   10363 main.go:141] libmachine: STDERR: 
	I0318 13:56:10.460066   10363 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/custom-flannel-099000/disk.qcow2 +20000M
	I0318 13:56:10.471710   10363 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:56:10.471730   10363 main.go:141] libmachine: STDERR: 
	I0318 13:56:10.471743   10363 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/custom-flannel-099000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/custom-flannel-099000/disk.qcow2
	I0318 13:56:10.471747   10363 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:56:10.471777   10363 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/custom-flannel-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/custom-flannel-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/custom-flannel-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:71:73:d5:e0:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/custom-flannel-099000/disk.qcow2
	I0318 13:56:10.473549   10363 main.go:141] libmachine: STDOUT: 
	I0318 13:56:10.473566   10363 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:56:10.473596   10363 client.go:171] duration metric: took 217.781833ms to LocalClient.Create
	I0318 13:56:12.475722   10363 start.go:128] duration metric: took 2.2444895s to createHost
	I0318 13:56:12.475802   10363 start.go:83] releasing machines lock for "custom-flannel-099000", held for 2.244615292s
	W0318 13:56:12.475839   10363 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:56:12.492810   10363 out.go:177] * Deleting "custom-flannel-099000" in qemu2 ...
	W0318 13:56:12.513403   10363 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:56:12.513417   10363 start.go:728] Will try again in 5 seconds ...
	I0318 13:56:17.515735   10363 start.go:360] acquireMachinesLock for custom-flannel-099000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:56:17.516153   10363 start.go:364] duration metric: took 312.167µs to acquireMachinesLock for "custom-flannel-099000"
	I0318 13:56:17.516281   10363 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:56:17.516533   10363 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:56:17.522677   10363 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 13:56:17.568965   10363 start.go:159] libmachine.API.Create for "custom-flannel-099000" (driver="qemu2")
	I0318 13:56:17.569017   10363 client.go:168] LocalClient.Create starting
	I0318 13:56:17.569140   10363 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:56:17.569200   10363 main.go:141] libmachine: Decoding PEM data...
	I0318 13:56:17.569213   10363 main.go:141] libmachine: Parsing certificate...
	I0318 13:56:17.569286   10363 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:56:17.569341   10363 main.go:141] libmachine: Decoding PEM data...
	I0318 13:56:17.569354   10363 main.go:141] libmachine: Parsing certificate...
	I0318 13:56:17.569848   10363 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:56:17.721601   10363 main.go:141] libmachine: Creating SSH key...
	I0318 13:56:17.800773   10363 main.go:141] libmachine: Creating Disk image...
	I0318 13:56:17.800781   10363 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:56:17.800953   10363 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/custom-flannel-099000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/custom-flannel-099000/disk.qcow2
	I0318 13:56:17.813483   10363 main.go:141] libmachine: STDOUT: 
	I0318 13:56:17.813505   10363 main.go:141] libmachine: STDERR: 
	I0318 13:56:17.813569   10363 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/custom-flannel-099000/disk.qcow2 +20000M
	I0318 13:56:17.824550   10363 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:56:17.824567   10363 main.go:141] libmachine: STDERR: 
	I0318 13:56:17.824581   10363 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/custom-flannel-099000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/custom-flannel-099000/disk.qcow2
	I0318 13:56:17.824587   10363 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:56:17.824618   10363 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/custom-flannel-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/custom-flannel-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/custom-flannel-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:50:4f:54:40:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/custom-flannel-099000/disk.qcow2
	I0318 13:56:17.826366   10363 main.go:141] libmachine: STDOUT: 
	I0318 13:56:17.826380   10363 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:56:17.826393   10363 client.go:171] duration metric: took 257.371208ms to LocalClient.Create
	I0318 13:56:19.828691   10363 start.go:128] duration metric: took 2.312104041s to createHost
	I0318 13:56:19.828782   10363 start.go:83] releasing machines lock for "custom-flannel-099000", held for 2.3126s
	W0318 13:56:19.829128   10363 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-099000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-099000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:56:19.840490   10363 out.go:177] 
	W0318 13:56:19.842672   10363 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:56:19.842868   10363 out.go:239] * 
	* 
	W0318 13:56:19.845634   10363 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:56:19.854453   10363 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.76s)
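As the trace shows, minikube deletes the half-created machine and retries once after five seconds ("Will try again in 5 seconds ..."), which cannot succeed while the socket_vmnet daemon stays down. Below is an illustrative Go sketch of that start/delete/retry shape only; it is not minikube's actual implementation, and createHost is a stand-in that fails the way the logs do.

// retry.go - illustrative sketch of the retry behavior visible in the trace
// above; not minikube's real code.
package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the libmachine create path; it always fails the
// way the logs do while the socket_vmnet daemon is unreachable.
func createHost(profile string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func startWithRetry(profile string, retries int, delay time.Duration) error {
	var err error
	for attempt := 0; attempt <= retries; attempt++ {
		if err = createHost(profile); err == nil {
			return nil
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		// In the trace, the half-created profile is deleted before retrying.
		fmt.Printf("* Deleting %q ...\n", profile)
		if attempt < retries {
			time.Sleep(delay)
		}
	}
	return fmt.Errorf("error provisioning guest: %w", err)
}

func main() {
	if err := startWithRetry("custom-flannel-099000", 1, 5*time.Second); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}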

TestNetworkPlugins/group/false/Start (9.96s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-099000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-099000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.958976709s)

-- stdout --
	* [false-099000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-099000" primary control-plane node in "false-099000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-099000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:56:22.387174   10482 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:56:22.387299   10482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:56:22.387302   10482 out.go:304] Setting ErrFile to fd 2...
	I0318 13:56:22.387304   10482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:56:22.387455   10482 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:56:22.388539   10482 out.go:298] Setting JSON to false
	I0318 13:56:22.405329   10482 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6954,"bootTime":1710788428,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:56:22.405387   10482 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:56:22.411773   10482 out.go:177] * [false-099000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:56:22.423926   10482 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:56:22.418979   10482 notify.go:220] Checking for updates...
	I0318 13:56:22.431922   10482 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:56:22.437896   10482 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:56:22.444997   10482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:56:22.447925   10482 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:56:22.450990   10482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:56:22.454320   10482 config.go:182] Loaded profile config "multinode-685000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:56:22.454387   10482 config.go:182] Loaded profile config "stopped-upgrade-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 13:56:22.454432   10482 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:56:22.457857   10482 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 13:56:22.464930   10482 start.go:297] selected driver: qemu2
	I0318 13:56:22.464935   10482 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:56:22.464940   10482 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:56:22.467310   10482 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:56:22.470795   10482 out.go:177] * Automatically selected the socket_vmnet network
	I0318 13:56:22.475022   10482 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:56:22.475052   10482 cni.go:84] Creating CNI manager for "false"
	I0318 13:56:22.475088   10482 start.go:340] cluster config:
	{Name:false-099000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:56:22.480074   10482 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:56:22.487920   10482 out.go:177] * Starting "false-099000" primary control-plane node in "false-099000" cluster
	I0318 13:56:22.491939   10482 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:56:22.491953   10482 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:56:22.491960   10482 cache.go:56] Caching tarball of preloaded images
	I0318 13:56:22.492016   10482 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:56:22.492021   10482 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:56:22.492077   10482 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/false-099000/config.json ...
	I0318 13:56:22.492089   10482 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/false-099000/config.json: {Name:mk710b982f2b04a3495a7556c169e3b68855ccca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:56:22.492296   10482 start.go:360] acquireMachinesLock for false-099000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:56:22.492331   10482 start.go:364] duration metric: took 26.375µs to acquireMachinesLock for "false-099000"
	I0318 13:56:22.492344   10482 start.go:93] Provisioning new machine with config: &{Name:false-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:56:22.492389   10482 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:56:22.500932   10482 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 13:56:22.517790   10482 start.go:159] libmachine.API.Create for "false-099000" (driver="qemu2")
	I0318 13:56:22.517823   10482 client.go:168] LocalClient.Create starting
	I0318 13:56:22.517877   10482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:56:22.517904   10482 main.go:141] libmachine: Decoding PEM data...
	I0318 13:56:22.517914   10482 main.go:141] libmachine: Parsing certificate...
	I0318 13:56:22.517964   10482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:56:22.517984   10482 main.go:141] libmachine: Decoding PEM data...
	I0318 13:56:22.517991   10482 main.go:141] libmachine: Parsing certificate...
	I0318 13:56:22.518337   10482 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:56:22.660141   10482 main.go:141] libmachine: Creating SSH key...
	I0318 13:56:22.792011   10482 main.go:141] libmachine: Creating Disk image...
	I0318 13:56:22.792019   10482 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:56:22.792199   10482 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/false-099000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/false-099000/disk.qcow2
	I0318 13:56:22.804637   10482 main.go:141] libmachine: STDOUT: 
	I0318 13:56:22.804655   10482 main.go:141] libmachine: STDERR: 
	I0318 13:56:22.804719   10482 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/false-099000/disk.qcow2 +20000M
	I0318 13:56:22.815524   10482 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:56:22.815540   10482 main.go:141] libmachine: STDERR: 
	I0318 13:56:22.815557   10482 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/false-099000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/false-099000/disk.qcow2
	I0318 13:56:22.815561   10482 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:56:22.815595   10482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/false-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/false-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/false-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:8f:de:bc:c0:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/false-099000/disk.qcow2
	I0318 13:56:22.817350   10482 main.go:141] libmachine: STDOUT: 
	I0318 13:56:22.817364   10482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:56:22.817382   10482 client.go:171] duration metric: took 299.55525ms to LocalClient.Create
	I0318 13:56:24.819684   10482 start.go:128] duration metric: took 2.327230083s to createHost
	I0318 13:56:24.819769   10482 start.go:83] releasing machines lock for "false-099000", held for 2.327439833s
	W0318 13:56:24.819847   10482 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:56:24.834052   10482 out.go:177] * Deleting "false-099000" in qemu2 ...
	W0318 13:56:24.862752   10482 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:56:24.862793   10482 start.go:728] Will try again in 5 seconds ...
	I0318 13:56:29.865036   10482 start.go:360] acquireMachinesLock for false-099000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:56:29.865554   10482 start.go:364] duration metric: took 402.5µs to acquireMachinesLock for "false-099000"
	I0318 13:56:29.865629   10482 start.go:93] Provisioning new machine with config: &{Name:false-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:56:29.865901   10482 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:56:29.874382   10482 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 13:56:29.922915   10482 start.go:159] libmachine.API.Create for "false-099000" (driver="qemu2")
	I0318 13:56:29.922962   10482 client.go:168] LocalClient.Create starting
	I0318 13:56:29.923061   10482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:56:29.923118   10482 main.go:141] libmachine: Decoding PEM data...
	I0318 13:56:29.923148   10482 main.go:141] libmachine: Parsing certificate...
	I0318 13:56:29.923209   10482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:56:29.923250   10482 main.go:141] libmachine: Decoding PEM data...
	I0318 13:56:29.923260   10482 main.go:141] libmachine: Parsing certificate...
	I0318 13:56:29.923755   10482 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:56:30.075900   10482 main.go:141] libmachine: Creating SSH key...
	I0318 13:56:30.251064   10482 main.go:141] libmachine: Creating Disk image...
	I0318 13:56:30.251078   10482 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:56:30.251291   10482 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/false-099000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/false-099000/disk.qcow2
	I0318 13:56:30.264225   10482 main.go:141] libmachine: STDOUT: 
	I0318 13:56:30.264246   10482 main.go:141] libmachine: STDERR: 
	I0318 13:56:30.264301   10482 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/false-099000/disk.qcow2 +20000M
	I0318 13:56:30.275460   10482 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:56:30.275478   10482 main.go:141] libmachine: STDERR: 
	I0318 13:56:30.275493   10482 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/false-099000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/false-099000/disk.qcow2
	I0318 13:56:30.275501   10482 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:56:30.275548   10482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/false-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/false-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/false-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:18:70:e7:ae:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/false-099000/disk.qcow2
	I0318 13:56:30.277345   10482 main.go:141] libmachine: STDOUT: 
	I0318 13:56:30.277358   10482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:56:30.277372   10482 client.go:171] duration metric: took 354.406208ms to LocalClient.Create
	I0318 13:56:32.279449   10482 start.go:128] duration metric: took 2.4135215s to createHost
	I0318 13:56:32.279474   10482 start.go:83] releasing machines lock for "false-099000", held for 2.413910792s
	W0318 13:56:32.279619   10482 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-099000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-099000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:56:32.286931   10482 out.go:177] 
	W0318 13:56:32.294946   10482 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:56:32.294951   10482 out.go:239] * 
	* 
	W0318 13:56:32.295460   10482 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:56:32.306882   10482 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.96s)
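
Every failure in this group reduces to the same precondition: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU is never handed a network file descriptor and minikube aborts with GUEST_PROVISION. A minimal pre-flight sketch for the build agent, assuming the stock install paths shown in the logs above (the launch line follows the socket_vmnet README and is an assumption about this host's setup):

	# Is the Unix socket present, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If not, start the daemon as root; the gateway address is illustrative.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet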

TestNetworkPlugins/group/enable-default-cni/Start (9.78s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-099000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-099000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.774028834s)

-- stdout --
	* [enable-default-cni-099000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-099000" primary control-plane node in "enable-default-cni-099000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-099000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:56:34.576873   10592 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:56:34.577021   10592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:56:34.577025   10592 out.go:304] Setting ErrFile to fd 2...
	I0318 13:56:34.577027   10592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:56:34.577144   10592 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:56:34.578220   10592 out.go:298] Setting JSON to false
	I0318 13:56:34.594373   10592 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6966,"bootTime":1710788428,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:56:34.594431   10592 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:56:34.599442   10592 out.go:177] * [enable-default-cni-099000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:56:34.606383   10592 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:56:34.606439   10592 notify.go:220] Checking for updates...
	I0318 13:56:34.610522   10592 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:56:34.614322   10592 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:56:34.617331   10592 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:56:34.620369   10592 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:56:34.623263   10592 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:56:34.626607   10592 config.go:182] Loaded profile config "multinode-685000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:56:34.626673   10592 config.go:182] Loaded profile config "stopped-upgrade-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 13:56:34.626716   10592 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:56:34.631357   10592 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 13:56:34.638365   10592 start.go:297] selected driver: qemu2
	I0318 13:56:34.638371   10592 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:56:34.638377   10592 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:56:34.640646   10592 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:56:34.645298   10592 out.go:177] * Automatically selected the socket_vmnet network
	E0318 13:56:34.648400   10592 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0318 13:56:34.648419   10592 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:56:34.648472   10592 cni.go:84] Creating CNI manager for "bridge"
	I0318 13:56:34.648482   10592 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 13:56:34.648513   10592 start.go:340] cluster config:
	{Name:enable-default-cni-099000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:56:34.653082   10592 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:56:34.661349   10592 out.go:177] * Starting "enable-default-cni-099000" primary control-plane node in "enable-default-cni-099000" cluster
	I0318 13:56:34.665338   10592 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:56:34.665358   10592 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:56:34.665370   10592 cache.go:56] Caching tarball of preloaded images
	I0318 13:56:34.665439   10592 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:56:34.665445   10592 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:56:34.665523   10592 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/enable-default-cni-099000/config.json ...
	I0318 13:56:34.665537   10592 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/enable-default-cni-099000/config.json: {Name:mk5c5a7dd3741a05a5db476b9fd0c7c7f55c0a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:56:34.665755   10592 start.go:360] acquireMachinesLock for enable-default-cni-099000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:56:34.665789   10592 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "enable-default-cni-099000"
	I0318 13:56:34.665803   10592 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:56:34.665855   10592 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:56:34.674348   10592 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 13:56:34.691909   10592 start.go:159] libmachine.API.Create for "enable-default-cni-099000" (driver="qemu2")
	I0318 13:56:34.691946   10592 client.go:168] LocalClient.Create starting
	I0318 13:56:34.692007   10592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:56:34.692040   10592 main.go:141] libmachine: Decoding PEM data...
	I0318 13:56:34.692055   10592 main.go:141] libmachine: Parsing certificate...
	I0318 13:56:34.692100   10592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:56:34.692122   10592 main.go:141] libmachine: Decoding PEM data...
	I0318 13:56:34.692131   10592 main.go:141] libmachine: Parsing certificate...
	I0318 13:56:34.692567   10592 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:56:34.852104   10592 main.go:141] libmachine: Creating SSH key...
	I0318 13:56:34.941801   10592 main.go:141] libmachine: Creating Disk image...
	I0318 13:56:34.941807   10592 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:56:34.941986   10592 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/enable-default-cni-099000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/enable-default-cni-099000/disk.qcow2
	I0318 13:56:34.954168   10592 main.go:141] libmachine: STDOUT: 
	I0318 13:56:34.954190   10592 main.go:141] libmachine: STDERR: 
	I0318 13:56:34.954248   10592 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/enable-default-cni-099000/disk.qcow2 +20000M
	I0318 13:56:34.965027   10592 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:56:34.965048   10592 main.go:141] libmachine: STDERR: 
	I0318 13:56:34.965068   10592 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/enable-default-cni-099000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/enable-default-cni-099000/disk.qcow2
	I0318 13:56:34.965073   10592 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:56:34.965103   10592 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/enable-default-cni-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/enable-default-cni-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/enable-default-cni-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:99:2b:d8:d8:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/enable-default-cni-099000/disk.qcow2
	I0318 13:56:34.966833   10592 main.go:141] libmachine: STDOUT: 
	I0318 13:56:34.966852   10592 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:56:34.966873   10592 client.go:171] duration metric: took 274.922792ms to LocalClient.Create
	I0318 13:56:36.969066   10592 start.go:128] duration metric: took 2.303207916s to createHost
	I0318 13:56:36.969109   10592 start.go:83] releasing machines lock for "enable-default-cni-099000", held for 2.303322541s
	W0318 13:56:36.969173   10592 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:56:36.983156   10592 out.go:177] * Deleting "enable-default-cni-099000" in qemu2 ...
	W0318 13:56:37.003699   10592 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:56:37.003731   10592 start.go:728] Will try again in 5 seconds ...
	I0318 13:56:42.006079   10592 start.go:360] acquireMachinesLock for enable-default-cni-099000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:56:42.006760   10592 start.go:364] duration metric: took 508.167µs to acquireMachinesLock for "enable-default-cni-099000"
	I0318 13:56:42.006965   10592 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:56:42.007375   10592 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:56:42.018215   10592 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 13:56:42.059599   10592 start.go:159] libmachine.API.Create for "enable-default-cni-099000" (driver="qemu2")
	I0318 13:56:42.059666   10592 client.go:168] LocalClient.Create starting
	I0318 13:56:42.059801   10592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:56:42.059872   10592 main.go:141] libmachine: Decoding PEM data...
	I0318 13:56:42.059892   10592 main.go:141] libmachine: Parsing certificate...
	I0318 13:56:42.059960   10592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:56:42.060003   10592 main.go:141] libmachine: Decoding PEM data...
	I0318 13:56:42.060017   10592 main.go:141] libmachine: Parsing certificate...
	I0318 13:56:42.060570   10592 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:56:42.210246   10592 main.go:141] libmachine: Creating SSH key...
	I0318 13:56:42.248868   10592 main.go:141] libmachine: Creating Disk image...
	I0318 13:56:42.248874   10592 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:56:42.249050   10592 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/enable-default-cni-099000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/enable-default-cni-099000/disk.qcow2
	I0318 13:56:42.261527   10592 main.go:141] libmachine: STDOUT: 
	I0318 13:56:42.261545   10592 main.go:141] libmachine: STDERR: 
	I0318 13:56:42.261596   10592 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/enable-default-cni-099000/disk.qcow2 +20000M
	I0318 13:56:42.272800   10592 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:56:42.272821   10592 main.go:141] libmachine: STDERR: 
	I0318 13:56:42.272836   10592 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/enable-default-cni-099000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/enable-default-cni-099000/disk.qcow2
	I0318 13:56:42.272840   10592 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:56:42.272884   10592 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/enable-default-cni-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/enable-default-cni-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/enable-default-cni-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:5f:68:73:2c:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/enable-default-cni-099000/disk.qcow2
	I0318 13:56:42.274857   10592 main.go:141] libmachine: STDOUT: 
	I0318 13:56:42.274878   10592 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:56:42.274891   10592 client.go:171] duration metric: took 215.219542ms to LocalClient.Create
	I0318 13:56:44.277077   10592 start.go:128] duration metric: took 2.269677167s to createHost
	I0318 13:56:44.277209   10592 start.go:83] releasing machines lock for "enable-default-cni-099000", held for 2.270423083s
	W0318 13:56:44.277659   10592 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-099000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-099000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:56:44.287295   10592 out.go:177] 
	W0318 13:56:44.294408   10592 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:56:44.294498   10592 out.go:239] * 
	* 
	W0318 13:56:44.297511   10592 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:56:44.306351   10592 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.78s)
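
Note that this test still passes the deprecated --enable-default-cni flag; per the E0318 13:56:34.648400 line above, minikube remaps it to --cni=bridge, so an equivalent invocation without the deprecated flag would presumably be:

	out/minikube-darwin-arm64 start -p enable-default-cni-099000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2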

TestNetworkPlugins/group/flannel/Start (10.07s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-099000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-099000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (10.065852875s)

-- stdout --
	* [flannel-099000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-099000" primary control-plane node in "flannel-099000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-099000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:56:46.835490   10706 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:56:46.835614   10706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:56:46.835617   10706 out.go:304] Setting ErrFile to fd 2...
	I0318 13:56:46.835619   10706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:56:46.835748   10706 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:56:46.836829   10706 out.go:298] Setting JSON to false
	I0318 13:56:46.852999   10706 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6978,"bootTime":1710788428,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:56:46.853066   10706 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:56:46.858106   10706 out.go:177] * [flannel-099000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:56:46.866027   10706 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:56:46.866084   10706 notify.go:220] Checking for updates...
	I0318 13:56:46.871995   10706 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:56:46.876243   10706 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:56:46.877669   10706 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:56:46.880961   10706 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:56:46.884018   10706 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:56:46.887375   10706 config.go:182] Loaded profile config "multinode-685000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:56:46.887439   10706 config.go:182] Loaded profile config "stopped-upgrade-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 13:56:46.887501   10706 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:56:46.891901   10706 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 13:56:46.898945   10706 start.go:297] selected driver: qemu2
	I0318 13:56:46.898955   10706 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:56:46.898960   10706 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:56:46.901018   10706 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:56:46.903881   10706 out.go:177] * Automatically selected the socket_vmnet network
	I0318 13:56:46.907035   10706 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:56:46.907075   10706 cni.go:84] Creating CNI manager for "flannel"
	I0318 13:56:46.907092   10706 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0318 13:56:46.907123   10706 start.go:340] cluster config:
	{Name:flannel-099000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:56:46.911218   10706 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:56:46.918935   10706 out.go:177] * Starting "flannel-099000" primary control-plane node in "flannel-099000" cluster
	I0318 13:56:46.922958   10706 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:56:46.922970   10706 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:56:46.922975   10706 cache.go:56] Caching tarball of preloaded images
	I0318 13:56:46.923033   10706 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:56:46.923038   10706 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:56:46.923086   10706 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/flannel-099000/config.json ...
	I0318 13:56:46.923097   10706 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/flannel-099000/config.json: {Name:mk23263f1cfc77e9eb46e91452b230d135f5899a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:56:46.923312   10706 start.go:360] acquireMachinesLock for flannel-099000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:56:46.923339   10706 start.go:364] duration metric: took 21.792µs to acquireMachinesLock for "flannel-099000"
	I0318 13:56:46.923350   10706 start.go:93] Provisioning new machine with config: &{Name:flannel-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:56:46.923377   10706 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:56:46.927982   10706 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 13:56:46.942048   10706 start.go:159] libmachine.API.Create for "flannel-099000" (driver="qemu2")
	I0318 13:56:46.942086   10706 client.go:168] LocalClient.Create starting
	I0318 13:56:46.942149   10706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:56:46.942181   10706 main.go:141] libmachine: Decoding PEM data...
	I0318 13:56:46.942190   10706 main.go:141] libmachine: Parsing certificate...
	I0318 13:56:46.942239   10706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:56:46.942260   10706 main.go:141] libmachine: Decoding PEM data...
	I0318 13:56:46.942271   10706 main.go:141] libmachine: Parsing certificate...
	I0318 13:56:46.942627   10706 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:56:47.083498   10706 main.go:141] libmachine: Creating SSH key...
	I0318 13:56:47.494128   10706 main.go:141] libmachine: Creating Disk image...
	I0318 13:56:47.494138   10706 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:56:47.494339   10706 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/flannel-099000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/flannel-099000/disk.qcow2
	I0318 13:56:47.507111   10706 main.go:141] libmachine: STDOUT: 
	I0318 13:56:47.507145   10706 main.go:141] libmachine: STDERR: 
	I0318 13:56:47.507204   10706 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/flannel-099000/disk.qcow2 +20000M
	I0318 13:56:47.518607   10706 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:56:47.518624   10706 main.go:141] libmachine: STDERR: 
	I0318 13:56:47.518646   10706 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/flannel-099000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/flannel-099000/disk.qcow2
	I0318 13:56:47.518652   10706 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:56:47.518685   10706 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/flannel-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/flannel-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/flannel-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:75:07:be:a3:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/flannel-099000/disk.qcow2
	I0318 13:56:47.520554   10706 main.go:141] libmachine: STDOUT: 
	I0318 13:56:47.520573   10706 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:56:47.520591   10706 client.go:171] duration metric: took 578.50275ms to LocalClient.Create
	I0318 13:56:49.522725   10706 start.go:128] duration metric: took 2.599349s to createHost
	I0318 13:56:49.522775   10706 start.go:83] releasing machines lock for "flannel-099000", held for 2.599438458s
	W0318 13:56:49.522846   10706 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:56:49.540072   10706 out.go:177] * Deleting "flannel-099000" in qemu2 ...
	W0318 13:56:49.562059   10706 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:56:49.562081   10706 start.go:728] Will try again in 5 seconds ...
	I0318 13:56:54.564211   10706 start.go:360] acquireMachinesLock for flannel-099000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:56:54.564478   10706 start.go:364] duration metric: took 190.375µs to acquireMachinesLock for "flannel-099000"
	I0318 13:56:54.564559   10706 start.go:93] Provisioning new machine with config: &{Name:flannel-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:56:54.564686   10706 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:56:54.570247   10706 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 13:56:54.608033   10706 start.go:159] libmachine.API.Create for "flannel-099000" (driver="qemu2")
	I0318 13:56:54.608152   10706 client.go:168] LocalClient.Create starting
	I0318 13:56:54.608247   10706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:56:54.608301   10706 main.go:141] libmachine: Decoding PEM data...
	I0318 13:56:54.608318   10706 main.go:141] libmachine: Parsing certificate...
	I0318 13:56:54.608375   10706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:56:54.608418   10706 main.go:141] libmachine: Decoding PEM data...
	I0318 13:56:54.608431   10706 main.go:141] libmachine: Parsing certificate...
	I0318 13:56:54.608904   10706 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:56:54.758390   10706 main.go:141] libmachine: Creating SSH key...
	I0318 13:56:54.802939   10706 main.go:141] libmachine: Creating Disk image...
	I0318 13:56:54.802945   10706 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:56:54.803512   10706 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/flannel-099000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/flannel-099000/disk.qcow2
	I0318 13:56:54.815960   10706 main.go:141] libmachine: STDOUT: 
	I0318 13:56:54.815982   10706 main.go:141] libmachine: STDERR: 
	I0318 13:56:54.816035   10706 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/flannel-099000/disk.qcow2 +20000M
	I0318 13:56:54.826773   10706 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:56:54.826791   10706 main.go:141] libmachine: STDERR: 
	I0318 13:56:54.826801   10706 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/flannel-099000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/flannel-099000/disk.qcow2
	I0318 13:56:54.826805   10706 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:56:54.826843   10706 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/flannel-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/flannel-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/flannel-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:87:da:f9:4d:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/flannel-099000/disk.qcow2
	I0318 13:56:54.828578   10706 main.go:141] libmachine: STDOUT: 
	I0318 13:56:54.828596   10706 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:56:54.828617   10706 client.go:171] duration metric: took 220.451125ms to LocalClient.Create
	I0318 13:56:56.830783   10706 start.go:128] duration metric: took 2.266083291s to createHost
	I0318 13:56:56.830837   10706 start.go:83] releasing machines lock for "flannel-099000", held for 2.266352166s
	W0318 13:56:56.831062   10706 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-099000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-099000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:56:56.841485   10706 out.go:177] 
	W0318 13:56:56.845598   10706 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:56:56.845633   10706 out.go:239] * 
	* 
	W0318 13:56:56.847884   10706 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:56:56.857452   10706 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (10.07s)
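
Note: every failure in this group dies at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet, so qemu-system-aarch64 is never launched. A minimal triage sketch for the CI host follows (the binary and socket paths are copied from the log above; the launchd label is an assumption and may differ per install):

	# Is the socket_vmnet daemon alive, and does its unix socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# Restart it if it is down; list services first, since the label
	# io.github.lima-vm.socket_vmnet is assumed, not taken from this log.
	sudo launchctl list | grep -i vmnet
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet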

TestNetworkPlugins/group/bridge/Start (9.75s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-099000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-099000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.751404958s)

-- stdout --
	* [bridge-099000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-099000" primary control-plane node in "bridge-099000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-099000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:56:59.362453   10825 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:56:59.362578   10825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:56:59.362584   10825 out.go:304] Setting ErrFile to fd 2...
	I0318 13:56:59.362586   10825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:56:59.362715   10825 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:56:59.363807   10825 out.go:298] Setting JSON to false
	I0318 13:56:59.379921   10825 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6991,"bootTime":1710788428,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:56:59.379981   10825 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:56:59.386766   10825 out.go:177] * [bridge-099000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:56:59.393694   10825 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:56:59.397722   10825 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:56:59.393732   10825 notify.go:220] Checking for updates...
	I0318 13:56:59.400632   10825 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:56:59.403594   10825 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:56:59.407660   10825 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:56:59.410622   10825 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:56:59.413931   10825 config.go:182] Loaded profile config "multinode-685000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:56:59.413991   10825 config.go:182] Loaded profile config "stopped-upgrade-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 13:56:59.414040   10825 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:56:59.418660   10825 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 13:56:59.425663   10825 start.go:297] selected driver: qemu2
	I0318 13:56:59.425675   10825 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:56:59.425680   10825 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:56:59.427835   10825 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:56:59.431702   10825 out.go:177] * Automatically selected the socket_vmnet network
	I0318 13:56:59.434702   10825 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:56:59.434734   10825 cni.go:84] Creating CNI manager for "bridge"
	I0318 13:56:59.434737   10825 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 13:56:59.434768   10825 start.go:340] cluster config:
	{Name:bridge-099000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:56:59.438992   10825 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:56:59.446455   10825 out.go:177] * Starting "bridge-099000" primary control-plane node in "bridge-099000" cluster
	I0318 13:56:59.450621   10825 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:56:59.450649   10825 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:56:59.450653   10825 cache.go:56] Caching tarball of preloaded images
	I0318 13:56:59.450731   10825 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:56:59.450739   10825 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:56:59.450802   10825 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/bridge-099000/config.json ...
	I0318 13:56:59.450815   10825 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/bridge-099000/config.json: {Name:mk12cb40327ab652248260c3f534240835e81806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:56:59.451112   10825 start.go:360] acquireMachinesLock for bridge-099000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:56:59.451143   10825 start.go:364] duration metric: took 25.625µs to acquireMachinesLock for "bridge-099000"
	I0318 13:56:59.451156   10825 start.go:93] Provisioning new machine with config: &{Name:bridge-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:56:59.451199   10825 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:56:59.454615   10825 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 13:56:59.469988   10825 start.go:159] libmachine.API.Create for "bridge-099000" (driver="qemu2")
	I0318 13:56:59.470020   10825 client.go:168] LocalClient.Create starting
	I0318 13:56:59.470087   10825 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:56:59.470117   10825 main.go:141] libmachine: Decoding PEM data...
	I0318 13:56:59.470130   10825 main.go:141] libmachine: Parsing certificate...
	I0318 13:56:59.470176   10825 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:56:59.470197   10825 main.go:141] libmachine: Decoding PEM data...
	I0318 13:56:59.470207   10825 main.go:141] libmachine: Parsing certificate...
	I0318 13:56:59.470565   10825 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:56:59.613687   10825 main.go:141] libmachine: Creating SSH key...
	I0318 13:56:59.689156   10825 main.go:141] libmachine: Creating Disk image...
	I0318 13:56:59.689162   10825 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:56:59.689332   10825 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/bridge-099000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/bridge-099000/disk.qcow2
	I0318 13:56:59.701921   10825 main.go:141] libmachine: STDOUT: 
	I0318 13:56:59.701939   10825 main.go:141] libmachine: STDERR: 
	I0318 13:56:59.702012   10825 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/bridge-099000/disk.qcow2 +20000M
	I0318 13:56:59.713075   10825 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:56:59.713097   10825 main.go:141] libmachine: STDERR: 
	I0318 13:56:59.713111   10825 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/bridge-099000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/bridge-099000/disk.qcow2
	I0318 13:56:59.713115   10825 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:56:59.713147   10825 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/bridge-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/bridge-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/bridge-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:5d:c2:0f:ff:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/bridge-099000/disk.qcow2
	I0318 13:56:59.715069   10825 main.go:141] libmachine: STDOUT: 
	I0318 13:56:59.715099   10825 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:56:59.715119   10825 client.go:171] duration metric: took 245.095333ms to LocalClient.Create
	I0318 13:57:01.717231   10825 start.go:128] duration metric: took 2.266039s to createHost
	I0318 13:57:01.717249   10825 start.go:83] releasing machines lock for "bridge-099000", held for 2.266112958s
	W0318 13:57:01.717263   10825 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:01.725673   10825 out.go:177] * Deleting "bridge-099000" in qemu2 ...
	W0318 13:57:01.738380   10825 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:01.738390   10825 start.go:728] Will try again in 5 seconds ...
	I0318 13:57:06.740492   10825 start.go:360] acquireMachinesLock for bridge-099000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:57:06.740638   10825 start.go:364] duration metric: took 112.791µs to acquireMachinesLock for "bridge-099000"
	I0318 13:57:06.740676   10825 start.go:93] Provisioning new machine with config: &{Name:bridge-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:57:06.740734   10825 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:57:06.750383   10825 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 13:57:06.766168   10825 start.go:159] libmachine.API.Create for "bridge-099000" (driver="qemu2")
	I0318 13:57:06.766202   10825 client.go:168] LocalClient.Create starting
	I0318 13:57:06.766264   10825 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:57:06.766295   10825 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:06.766305   10825 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:06.766339   10825 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:57:06.766359   10825 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:06.766366   10825 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:06.766627   10825 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:57:06.909805   10825 main.go:141] libmachine: Creating SSH key...
	I0318 13:57:07.014593   10825 main.go:141] libmachine: Creating Disk image...
	I0318 13:57:07.014600   10825 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:57:07.014794   10825 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/bridge-099000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/bridge-099000/disk.qcow2
	I0318 13:57:07.027957   10825 main.go:141] libmachine: STDOUT: 
	I0318 13:57:07.027981   10825 main.go:141] libmachine: STDERR: 
	I0318 13:57:07.028065   10825 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/bridge-099000/disk.qcow2 +20000M
	I0318 13:57:07.039050   10825 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:57:07.039074   10825 main.go:141] libmachine: STDERR: 
	I0318 13:57:07.039087   10825 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/bridge-099000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/bridge-099000/disk.qcow2
	I0318 13:57:07.039093   10825 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:57:07.039123   10825 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/bridge-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/bridge-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/bridge-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:d9:60:0e:14:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/bridge-099000/disk.qcow2
	I0318 13:57:07.040944   10825 main.go:141] libmachine: STDOUT: 
	I0318 13:57:07.040961   10825 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:57:07.040972   10825 client.go:171] duration metric: took 274.767458ms to LocalClient.Create
	I0318 13:57:09.043121   10825 start.go:128] duration metric: took 2.302368958s to createHost
	I0318 13:57:09.043166   10825 start.go:83] releasing machines lock for "bridge-099000", held for 2.302533667s
	W0318 13:57:09.043305   10825 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-099000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-099000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:09.053700   10825 out.go:177] 
	W0318 13:57:09.060802   10825 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:57:09.060817   10825 out.go:239] * 
	* 
	W0318 13:57:09.061696   10825 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:57:09.073591   10825 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.75s)
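
Note: bridge fails identically to flannel, which rules out the CNI choice; "Automatically selected the socket_vmnet network" means every profile depends on the same unreachable daemon. To isolate the network layer, a throwaway profile could be started on the qemu2 driver's builtin user network instead (a sketch; the profile name is hypothetical, and the user network trades away socket_vmnet features such as a host-reachable node IP):

	# Bypasses /var/run/socket_vmnet entirely; qemu does user-mode NAT itself.
	out/minikube-darwin-arm64 start -p netcheck-099000 --driver=qemu2 --network=user
	out/minikube-darwin-arm64 delete -p netcheck-099000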

TestNetworkPlugins/group/kubenet/Start (11.4s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-099000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-099000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (11.400749458s)

-- stdout --
	* [kubenet-099000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-099000" primary control-plane node in "kubenet-099000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-099000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:57:11.358830   10939 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:57:11.358964   10939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:11.358968   10939 out.go:304] Setting ErrFile to fd 2...
	I0318 13:57:11.358970   10939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:11.359090   10939 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:57:11.360255   10939 out.go:298] Setting JSON to false
	I0318 13:57:11.376539   10939 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7003,"bootTime":1710788428,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:57:11.376612   10939 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:57:11.383650   10939 out.go:177] * [kubenet-099000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:57:11.391554   10939 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:57:11.394590   10939 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:57:11.391652   10939 notify.go:220] Checking for updates...
	I0318 13:57:11.401528   10939 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:57:11.404567   10939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:57:11.407635   10939 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:57:11.410620   10939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:57:11.413957   10939 config.go:182] Loaded profile config "multinode-685000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:57:11.414026   10939 config.go:182] Loaded profile config "stopped-upgrade-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 13:57:11.414070   10939 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:57:11.418604   10939 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 13:57:11.425578   10939 start.go:297] selected driver: qemu2
	I0318 13:57:11.425584   10939 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:57:11.425590   10939 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:57:11.427701   10939 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:57:11.431598   10939 out.go:177] * Automatically selected the socket_vmnet network
	I0318 13:57:11.434711   10939 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:57:11.434767   10939 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0318 13:57:11.434799   10939 start.go:340] cluster config:
	{Name:kubenet-099000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:57:11.439081   10939 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:11.445517   10939 out.go:177] * Starting "kubenet-099000" primary control-plane node in "kubenet-099000" cluster
	I0318 13:57:11.449593   10939 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:57:11.449608   10939 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:57:11.449616   10939 cache.go:56] Caching tarball of preloaded images
	I0318 13:57:11.449667   10939 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:57:11.449673   10939 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:57:11.449730   10939 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/kubenet-099000/config.json ...
	I0318 13:57:11.449741   10939 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/kubenet-099000/config.json: {Name:mke67c261f1d8b7ad9288f1c5126d3c5ad1d0baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:57:11.449962   10939 start.go:360] acquireMachinesLock for kubenet-099000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:57:11.449991   10939 start.go:364] duration metric: took 24.083µs to acquireMachinesLock for "kubenet-099000"
	I0318 13:57:11.450008   10939 start.go:93] Provisioning new machine with config: &{Name:kubenet-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:57:11.450045   10939 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:57:11.457552   10939 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 13:57:11.472402   10939 start.go:159] libmachine.API.Create for "kubenet-099000" (driver="qemu2")
	I0318 13:57:11.472432   10939 client.go:168] LocalClient.Create starting
	I0318 13:57:11.472500   10939 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:57:11.472535   10939 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:11.472543   10939 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:11.472591   10939 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:57:11.472626   10939 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:11.472631   10939 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:11.472987   10939 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:57:11.615307   10939 main.go:141] libmachine: Creating SSH key...
	I0318 13:57:11.642633   10939 main.go:141] libmachine: Creating Disk image...
	I0318 13:57:11.642649   10939 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:57:11.642845   10939 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubenet-099000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubenet-099000/disk.qcow2
	I0318 13:57:11.655483   10939 main.go:141] libmachine: STDOUT: 
	I0318 13:57:11.655503   10939 main.go:141] libmachine: STDERR: 
	I0318 13:57:11.655570   10939 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubenet-099000/disk.qcow2 +20000M
	I0318 13:57:11.666905   10939 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:57:11.666927   10939 main.go:141] libmachine: STDERR: 
	I0318 13:57:11.666941   10939 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubenet-099000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubenet-099000/disk.qcow2
	I0318 13:57:11.666947   10939 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:57:11.666979   10939 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubenet-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubenet-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubenet-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:2b:ef:4d:90:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubenet-099000/disk.qcow2
	I0318 13:57:11.668850   10939 main.go:141] libmachine: STDOUT: 
	I0318 13:57:11.668866   10939 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:57:11.668884   10939 client.go:171] duration metric: took 196.446792ms to LocalClient.Create
	I0318 13:57:13.671117   10939 start.go:128] duration metric: took 2.22105225s to createHost
	I0318 13:57:13.671201   10939 start.go:83] releasing machines lock for "kubenet-099000", held for 2.221212875s
	W0318 13:57:13.671288   10939 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:13.678083   10939 out.go:177] * Deleting "kubenet-099000" in qemu2 ...
	W0318 13:57:13.707520   10939 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:13.707545   10939 start.go:728] Will try again in 5 seconds ...
	I0318 13:57:18.707760   10939 start.go:360] acquireMachinesLock for kubenet-099000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:57:20.268180   10939 start.go:364] duration metric: took 1.560268375s to acquireMachinesLock for "kubenet-099000"
	I0318 13:57:20.268282   10939 start.go:93] Provisioning new machine with config: &{Name:kubenet-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:57:20.268485   10939 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:57:20.277159   10939 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 13:57:20.323663   10939 start.go:159] libmachine.API.Create for "kubenet-099000" (driver="qemu2")
	I0318 13:57:20.323714   10939 client.go:168] LocalClient.Create starting
	I0318 13:57:20.323857   10939 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:57:20.323932   10939 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:20.323949   10939 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:20.324004   10939 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:57:20.324046   10939 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:20.324061   10939 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:20.324588   10939 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:57:20.485130   10939 main.go:141] libmachine: Creating SSH key...
	I0318 13:57:20.651173   10939 main.go:141] libmachine: Creating Disk image...
	I0318 13:57:20.651182   10939 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:57:20.651364   10939 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubenet-099000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubenet-099000/disk.qcow2
	I0318 13:57:20.664418   10939 main.go:141] libmachine: STDOUT: 
	I0318 13:57:20.664441   10939 main.go:141] libmachine: STDERR: 
	I0318 13:57:20.664519   10939 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubenet-099000/disk.qcow2 +20000M
	I0318 13:57:20.675062   10939 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:57:20.675077   10939 main.go:141] libmachine: STDERR: 
	I0318 13:57:20.675088   10939 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubenet-099000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubenet-099000/disk.qcow2
	I0318 13:57:20.675093   10939 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:57:20.675133   10939 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubenet-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubenet-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubenet-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:6f:42:7c:08:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/kubenet-099000/disk.qcow2
	I0318 13:57:20.676854   10939 main.go:141] libmachine: STDOUT: 
	I0318 13:57:20.676870   10939 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:57:20.676882   10939 client.go:171] duration metric: took 353.165833ms to LocalClient.Create
	I0318 13:57:22.678835   10939 start.go:128] duration metric: took 2.410311625s to createHost
	I0318 13:57:22.678933   10939 start.go:83] releasing machines lock for "kubenet-099000", held for 2.410724833s
	W0318 13:57:22.679296   10939 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-099000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-099000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:22.693963   10939 out.go:177] 
	W0318 13:57:22.702068   10939 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:57:22.702095   10939 out.go:239] * 
	* 
	W0318 13:57:22.704697   10939 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:57:22.713874   10939 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (11.40s)
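
Note: flannel, bridge, and kubenet have now failed at the identical point, always after qemu-img convert and resize succeed, so disk provisioning is healthy and only the socket connection is broken. The client invocation from the log can be replayed with a trivial command in place of qemu-system-aarch64 (a sketch, assuming socket_vmnet_client will exec an arbitrary argv the same way it execs qemu above):

	# Should exit non-zero with the same "Connection refused" while the daemon is down.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true &&
	  echo "socket reachable" ||
	  echo "connection refused: socket_vmnet daemon is not listening"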

TestStartStop/group/old-k8s-version/serial/FirstStart (11.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-255000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-255000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (11.841312042s)

-- stdout --
	* [old-k8s-version-255000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-255000" primary control-plane node in "old-k8s-version-255000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-255000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:57:17.948308   10953 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:57:17.948455   10953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:17.948458   10953 out.go:304] Setting ErrFile to fd 2...
	I0318 13:57:17.948460   10953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:17.948590   10953 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:57:17.949566   10953 out.go:298] Setting JSON to false
	I0318 13:57:17.965582   10953 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7009,"bootTime":1710788428,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:57:17.965649   10953 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:57:17.970898   10953 out.go:177] * [old-k8s-version-255000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:57:17.976900   10953 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:57:17.980828   10953 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:57:17.976903   10953 notify.go:220] Checking for updates...
	I0318 13:57:17.984893   10953 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:57:17.987775   10953 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:57:17.990823   10953 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:57:17.993858   10953 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:57:17.997048   10953 config.go:182] Loaded profile config "kubenet-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:57:17.997131   10953 config.go:182] Loaded profile config "multinode-685000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:57:17.997188   10953 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:57:18.001804   10953 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 13:57:18.007797   10953 start.go:297] selected driver: qemu2
	I0318 13:57:18.007804   10953 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:57:18.007810   10953 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:57:18.010094   10953 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:57:18.012810   10953 out.go:177] * Automatically selected the socket_vmnet network
	I0318 13:57:18.016943   10953 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:57:18.016994   10953 cni.go:84] Creating CNI manager for ""
	I0318 13:57:18.017003   10953 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0318 13:57:18.017038   10953 start.go:340] cluster config:
	{Name:old-k8s-version-255000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-255000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:57:18.021597   10953 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:18.029826   10953 out.go:177] * Starting "old-k8s-version-255000" primary control-plane node in "old-k8s-version-255000" cluster
	I0318 13:57:18.033657   10953 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 13:57:18.033672   10953 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 13:57:18.033680   10953 cache.go:56] Caching tarball of preloaded images
	I0318 13:57:18.033734   10953 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:57:18.033740   10953 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0318 13:57:18.033812   10953 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/old-k8s-version-255000/config.json ...
	I0318 13:57:18.033824   10953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/old-k8s-version-255000/config.json: {Name:mkada4b4e1d27d85b8ce73b79f9c7675d03d9370 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:57:18.034033   10953 start.go:360] acquireMachinesLock for old-k8s-version-255000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:57:18.034070   10953 start.go:364] duration metric: took 27µs to acquireMachinesLock for "old-k8s-version-255000"
	I0318 13:57:18.034084   10953 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-255000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-255000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:57:18.034118   10953 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:57:18.041694   10953 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:57:18.059455   10953 start.go:159] libmachine.API.Create for "old-k8s-version-255000" (driver="qemu2")
	I0318 13:57:18.059482   10953 client.go:168] LocalClient.Create starting
	I0318 13:57:18.059547   10953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:57:18.059576   10953 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:18.059594   10953 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:18.059641   10953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:57:18.059662   10953 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:18.059670   10953 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:18.060053   10953 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:57:18.206410   10953 main.go:141] libmachine: Creating SSH key...
	I0318 13:57:18.240712   10953 main.go:141] libmachine: Creating Disk image...
	I0318 13:57:18.240718   10953 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:57:18.240880   10953 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/disk.qcow2
	I0318 13:57:18.252999   10953 main.go:141] libmachine: STDOUT: 
	I0318 13:57:18.253030   10953 main.go:141] libmachine: STDERR: 
	I0318 13:57:18.253081   10953 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/disk.qcow2 +20000M
	I0318 13:57:18.263857   10953 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:57:18.263873   10953 main.go:141] libmachine: STDERR: 
	I0318 13:57:18.263889   10953 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/disk.qcow2
	I0318 13:57:18.263894   10953 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:57:18.263942   10953 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:3d:c7:22:a9:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/disk.qcow2
	I0318 13:57:18.265715   10953 main.go:141] libmachine: STDOUT: 
	I0318 13:57:18.265734   10953 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:57:18.265755   10953 client.go:171] duration metric: took 206.269625ms to LocalClient.Create
	I0318 13:57:20.267957   10953 start.go:128] duration metric: took 2.233828208s to createHost
	I0318 13:57:20.268027   10953 start.go:83] releasing machines lock for "old-k8s-version-255000", held for 2.233959209s
	W0318 13:57:20.268079   10953 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:20.287171   10953 out.go:177] * Deleting "old-k8s-version-255000" in qemu2 ...
	W0318 13:57:20.305530   10953 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:20.305555   10953 start.go:728] Will try again in 5 seconds ...
	I0318 13:57:25.307620   10953 start.go:360] acquireMachinesLock for old-k8s-version-255000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:57:27.366546   10953 start.go:364] duration metric: took 2.058892334s to acquireMachinesLock for "old-k8s-version-255000"
	I0318 13:57:27.366643   10953 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-255000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-255000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:57:27.366895   10953 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:57:27.381527   10953 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:57:27.432788   10953 start.go:159] libmachine.API.Create for "old-k8s-version-255000" (driver="qemu2")
	I0318 13:57:27.432840   10953 client.go:168] LocalClient.Create starting
	I0318 13:57:27.432942   10953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:57:27.433017   10953 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:27.433033   10953 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:27.433093   10953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:57:27.433140   10953 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:27.433162   10953 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:27.433661   10953 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:57:27.593310   10953 main.go:141] libmachine: Creating SSH key...
	I0318 13:57:27.682200   10953 main.go:141] libmachine: Creating Disk image...
	I0318 13:57:27.682207   10953 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:57:27.682380   10953 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/disk.qcow2
	I0318 13:57:27.694735   10953 main.go:141] libmachine: STDOUT: 
	I0318 13:57:27.694759   10953 main.go:141] libmachine: STDERR: 
	I0318 13:57:27.694811   10953 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/disk.qcow2 +20000M
	I0318 13:57:27.705861   10953 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:57:27.705886   10953 main.go:141] libmachine: STDERR: 
	I0318 13:57:27.705939   10953 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/disk.qcow2
	I0318 13:57:27.705954   10953 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:57:27.705991   10953 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:d7:89:38:4c:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/disk.qcow2
	I0318 13:57:27.707786   10953 main.go:141] libmachine: STDOUT: 
	I0318 13:57:27.707805   10953 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:57:27.707816   10953 client.go:171] duration metric: took 274.972542ms to LocalClient.Create
	I0318 13:57:29.708161   10953 start.go:128] duration metric: took 2.341234708s to createHost
	I0318 13:57:29.708242   10953 start.go:83] releasing machines lock for "old-k8s-version-255000", held for 2.34166825s
	W0318 13:57:29.708543   10953 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-255000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-255000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:29.725298   10953 out.go:177] 
	W0318 13:57:29.732893   10953 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:57:29.732925   10953 out.go:239] * 
	* 
	W0318 13:57:29.735703   10953 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:57:29.745610   10953 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-255000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000: exit status 7 (66.473834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-255000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (11.91s)
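
Note: both createHost attempts above fail at the same point: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so the VM never boots and minikube exits with GUEST_PROVISION (exit status 80). The fault is in the host-side networking helper on the build agent, not in the Kubernetes version under test. A minimal triage sketch, assuming socket_vmnet runs as a launchd service on the agent (the service label below is an assumption, not taken from this log):

	# check that the socket exists and a daemon is holding it
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# if managed by launchd (label assumed), inspect and force-restart it
	sudo launchctl list | grep -i socket_vmnet
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet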

TestStartStop/group/no-preload/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-205000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-205000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (9.906265541s)

-- stdout --
	* [no-preload-205000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-205000" primary control-plane node in "no-preload-205000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-205000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:57:24.973045   11063 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:57:24.973170   11063 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:24.973176   11063 out.go:304] Setting ErrFile to fd 2...
	I0318 13:57:24.973190   11063 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:24.973315   11063 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:57:24.974423   11063 out.go:298] Setting JSON to false
	I0318 13:57:24.990407   11063 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7016,"bootTime":1710788428,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:57:24.990491   11063 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:57:24.996158   11063 out.go:177] * [no-preload-205000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:57:25.002151   11063 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:57:25.002195   11063 notify.go:220] Checking for updates...
	I0318 13:57:25.006163   11063 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:57:25.009147   11063 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:57:25.012225   11063 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:57:25.015111   11063 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:57:25.018120   11063 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:57:25.021452   11063 config.go:182] Loaded profile config "multinode-685000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:57:25.021519   11063 config.go:182] Loaded profile config "old-k8s-version-255000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0318 13:57:25.021565   11063 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:57:25.025050   11063 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 13:57:25.032140   11063 start.go:297] selected driver: qemu2
	I0318 13:57:25.032145   11063 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:57:25.032150   11063 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:57:25.034225   11063 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:57:25.035666   11063 out.go:177] * Automatically selected the socket_vmnet network
	I0318 13:57:25.039260   11063 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:57:25.039297   11063 cni.go:84] Creating CNI manager for ""
	I0318 13:57:25.039305   11063 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:57:25.039313   11063 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 13:57:25.039345   11063 start.go:340] cluster config:
	{Name:no-preload-205000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-205000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:57:25.043718   11063 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:25.047201   11063 out.go:177] * Starting "no-preload-205000" primary control-plane node in "no-preload-205000" cluster
	I0318 13:57:25.055140   11063 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 13:57:25.055236   11063 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/no-preload-205000/config.json ...
	I0318 13:57:25.055254   11063 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/no-preload-205000/config.json: {Name:mk8336d6526dd63048025c0652e23c38f1a3a6c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:57:25.055304   11063 cache.go:107] acquiring lock: {Name:mk189d694ac9f9bf1008521ce7d7ba734fe35b8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:25.055306   11063 cache.go:107] acquiring lock: {Name:mk398330268077967ffce2dc9a8d62348f59afaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:25.055315   11063 cache.go:107] acquiring lock: {Name:mk87a912866bad82ed77fec052abc1ee549cd109 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:25.055374   11063 cache.go:115] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0318 13:57:25.055383   11063 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 85.125µs
	I0318 13:57:25.055389   11063 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0318 13:57:25.055398   11063 cache.go:107] acquiring lock: {Name:mke2593113ffc161eee833113097f92d52b8af4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:25.055459   11063 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:57:25.055480   11063 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:57:25.055511   11063 cache.go:107] acquiring lock: {Name:mk2dadb96952744d9d91b9cac79cdedf004695d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:25.055514   11063 cache.go:107] acquiring lock: {Name:mk58b946f4193488078a1c9bd02b6cf2cfadc68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:25.055509   11063 cache.go:107] acquiring lock: {Name:mkf8c28ddf89cdfa144a055453b897f79301e1ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:25.055553   11063 cache.go:107] acquiring lock: {Name:mk31ec1f5f58bb2c039516376fd89df94e487cc9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:25.055649   11063 start.go:360] acquireMachinesLock for no-preload-205000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:57:25.055688   11063 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:57:25.055706   11063 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 13:57:25.055754   11063 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:57:25.055756   11063 start.go:364] duration metric: took 96.125µs to acquireMachinesLock for "no-preload-205000"
	I0318 13:57:25.055768   11063 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:57:25.055800   11063 start.go:93] Provisioning new machine with config: &{Name:no-preload-205000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-205000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:57:25.055864   11063 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:57:25.064152   11063 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:57:25.055942   11063 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0318 13:57:25.069018   11063 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:57:25.069139   11063 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:57:25.073959   11063 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:57:25.074078   11063 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 13:57:25.074131   11063 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:57:25.074185   11063 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:57:25.074320   11063 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0318 13:57:25.081881   11063 start.go:159] libmachine.API.Create for "no-preload-205000" (driver="qemu2")
	I0318 13:57:25.081906   11063 client.go:168] LocalClient.Create starting
	I0318 13:57:25.081993   11063 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:57:25.082024   11063 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:25.082034   11063 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:25.082090   11063 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:57:25.082113   11063 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:25.082119   11063 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:25.082540   11063 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:57:25.235199   11063 main.go:141] libmachine: Creating SSH key...
	I0318 13:57:25.339736   11063 main.go:141] libmachine: Creating Disk image...
	I0318 13:57:25.339766   11063 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:57:25.339985   11063 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/disk.qcow2
	I0318 13:57:25.352823   11063 main.go:141] libmachine: STDOUT: 
	I0318 13:57:25.352846   11063 main.go:141] libmachine: STDERR: 
	I0318 13:57:25.352900   11063 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/disk.qcow2 +20000M
	I0318 13:57:25.364068   11063 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:57:25.364086   11063 main.go:141] libmachine: STDERR: 
	I0318 13:57:25.364097   11063 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/disk.qcow2
	I0318 13:57:25.364100   11063 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:57:25.364126   11063 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:14:8b:33:aa:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/disk.qcow2
	I0318 13:57:25.365979   11063 main.go:141] libmachine: STDOUT: 
	I0318 13:57:25.365992   11063 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:57:25.366011   11063 client.go:171] duration metric: took 284.101459ms to LocalClient.Create
	I0318 13:57:26.972324   11063 cache.go:162] opening:  /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0318 13:57:27.082286   11063 cache.go:157] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0318 13:57:27.082313   11063 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 2.026899167s
	I0318 13:57:27.082331   11063 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0318 13:57:27.119672   11063 cache.go:162] opening:  /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0318 13:57:27.125268   11063 cache.go:162] opening:  /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0318 13:57:27.128264   11063 cache.go:162] opening:  /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0318 13:57:27.129833   11063 cache.go:162] opening:  /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0
	I0318 13:57:27.140877   11063 cache.go:162] opening:  /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0318 13:57:27.141298   11063 cache.go:162] opening:  /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0318 13:57:27.366327   11063 start.go:128] duration metric: took 2.310452125s to createHost
	I0318 13:57:27.366400   11063 start.go:83] releasing machines lock for "no-preload-205000", held for 2.310640584s
	W0318 13:57:27.366453   11063 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:27.392810   11063 out.go:177] * Deleting "no-preload-205000" in qemu2 ...
	W0318 13:57:27.414036   11063 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:27.414064   11063 start.go:728] Will try again in 5 seconds ...
	I0318 13:57:29.441749   11063 cache.go:157] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0318 13:57:29.441831   11063 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 4.386318333s
	I0318 13:57:29.441866   11063 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0318 13:57:30.176741   11063 cache.go:157] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0318 13:57:30.176755   11063 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 5.121383167s
	I0318 13:57:30.176764   11063 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0318 13:57:31.051700   11063 cache.go:157] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0318 13:57:31.051765   11063 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 5.996342416s
	I0318 13:57:31.051797   11063 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0318 13:57:31.325879   11063 cache.go:157] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0318 13:57:31.325932   11063 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 6.270660417s
	I0318 13:57:31.325956   11063 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0318 13:57:31.786613   11063 cache.go:157] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0318 13:57:31.786677   11063 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 6.731418333s
	I0318 13:57:31.786703   11063 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0318 13:57:32.414384   11063 start.go:360] acquireMachinesLock for no-preload-205000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:57:32.414716   11063 start.go:364] duration metric: took 258.625µs to acquireMachinesLock for "no-preload-205000"
	I0318 13:57:32.414771   11063 start.go:93] Provisioning new machine with config: &{Name:no-preload-205000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-205000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:57:32.414999   11063 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:57:32.424607   11063 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:57:32.474048   11063 start.go:159] libmachine.API.Create for "no-preload-205000" (driver="qemu2")
	I0318 13:57:32.474107   11063 client.go:168] LocalClient.Create starting
	I0318 13:57:32.474191   11063 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:57:32.474244   11063 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:32.474262   11063 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:32.474314   11063 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:57:32.474341   11063 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:32.474351   11063 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:32.474903   11063 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:57:32.637432   11063 main.go:141] libmachine: Creating SSH key...
	I0318 13:57:32.778820   11063 main.go:141] libmachine: Creating Disk image...
	I0318 13:57:32.778829   11063 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:57:32.778997   11063 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/disk.qcow2
	I0318 13:57:32.791754   11063 main.go:141] libmachine: STDOUT: 
	I0318 13:57:32.791787   11063 main.go:141] libmachine: STDERR: 
	I0318 13:57:32.791866   11063 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/disk.qcow2 +20000M
	I0318 13:57:32.803124   11063 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:57:32.803147   11063 main.go:141] libmachine: STDERR: 
	I0318 13:57:32.803165   11063 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/disk.qcow2
	I0318 13:57:32.803172   11063 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:57:32.803224   11063 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:0f:a2:e1:9b:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/disk.qcow2
	I0318 13:57:32.805035   11063 main.go:141] libmachine: STDOUT: 
	I0318 13:57:32.805049   11063 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:57:32.805066   11063 client.go:171] duration metric: took 330.954625ms to LocalClient.Create
	I0318 13:57:34.520970   11063 cache.go:157] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 exists
	I0318 13:57:34.521059   11063 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0" took 9.465665875s
	I0318 13:57:34.521083   11063 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0318 13:57:34.521136   11063 cache.go:87] Successfully saved all images to host disk.
	I0318 13:57:34.807256   11063 start.go:128] duration metric: took 2.3922275s to createHost
	I0318 13:57:34.807333   11063 start.go:83] releasing machines lock for "no-preload-205000", held for 2.392607666s
	W0318 13:57:34.807694   11063 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-205000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-205000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:34.815070   11063 out.go:177] 
	W0318 13:57:34.821981   11063 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:57:34.822012   11063 out.go:239] * 
	* 
	W0318 13:57:34.824762   11063 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:57:34.833028   11063 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-205000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-205000 -n no-preload-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-205000 -n no-preload-205000: exit status 7 (67.761791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.98s)
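
Every start failure in this group reduces to the same stderr line: Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning nothing was listening on the socket_vmnet unix socket when the qemu2 driver launched the VM. As a hedged diagnostic sketch (a standalone probe written for this report, not part of minikube or its test suite), the reachability of that socket can be checked directly:

	// probe_socket_vmnet.go: hypothetical diagnostic for this report.
	// It only checks that something accepts connections on the unix
	// socket path quoted in the failing log lines above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path copied from the log
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1) // same condition as the "Connection refused" above
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On this agent the probe would presumably fail the same way, which points at the socket_vmnet daemon on the host rather than at the minikube binary under test.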

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-255000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-255000 create -f testdata/busybox.yaml: exit status 1 (30.80825ms)

** stderr ** 
	error: context "old-k8s-version-255000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-255000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000: exit status 7 (31.287291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-255000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000: exit status 7 (30.886292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-255000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
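
The DeployApp and addon failures here are secondary: because the cluster never started, no "old-k8s-version-255000" context was ever written to the kubeconfig, so every kubectl --context call fails before reaching a server. A minimal sketch of that same existence check, assuming client-go's standard kubeconfig loading rules (a hypothetical helper, not code from the test suite):

	// context_exists.go: hypothetical check mirroring kubectl's
	// `context "..." does not exist` error via client-go's loader.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		name := "old-k8s-version-255000" // context name from the log
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if _, ok := cfg.Contexts[name]; !ok {
			fmt.Fprintf(os.Stderr, "context %q does not exist\n", name)
			os.Exit(1)
		}
		fmt.Printf("context %q found\n", name)
	}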

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-255000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-255000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-255000 describe deploy/metrics-server -n kube-system: exit status 1 (26.583792ms)

** stderr ** 
	error: context "old-k8s-version-255000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-255000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000: exit status 7 (31.375667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-255000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-255000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-255000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.207298541s)

-- stdout --
	* [old-k8s-version-255000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-255000" primary control-plane node in "old-k8s-version-255000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-255000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-255000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:57:32.114864   11136 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:57:32.114988   11136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:32.114991   11136 out.go:304] Setting ErrFile to fd 2...
	I0318 13:57:32.114993   11136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:32.115120   11136 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:57:32.116114   11136 out.go:298] Setting JSON to false
	I0318 13:57:32.132146   11136 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7024,"bootTime":1710788428,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:57:32.132214   11136 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:57:32.137229   11136 out.go:177] * [old-k8s-version-255000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:57:32.148155   11136 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:57:32.144289   11136 notify.go:220] Checking for updates...
	I0318 13:57:32.155105   11136 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:57:32.158178   11136 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:57:32.162202   11136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:57:32.170174   11136 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:57:32.174220   11136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:57:32.177554   11136 config.go:182] Loaded profile config "old-k8s-version-255000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0318 13:57:32.181121   11136 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 13:57:32.185240   11136 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:57:32.188157   11136 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 13:57:32.195221   11136 start.go:297] selected driver: qemu2
	I0318 13:57:32.195227   11136 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-255000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-255000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:57:32.195300   11136 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:57:32.197606   11136 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:57:32.197651   11136 cni.go:84] Creating CNI manager for ""
	I0318 13:57:32.197659   11136 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0318 13:57:32.197686   11136 start.go:340] cluster config:
	{Name:old-k8s-version-255000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-255000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:57:32.202067   11136 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:32.210203   11136 out.go:177] * Starting "old-k8s-version-255000" primary control-plane node in "old-k8s-version-255000" cluster
	I0318 13:57:32.218170   11136 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 13:57:32.218184   11136 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 13:57:32.218191   11136 cache.go:56] Caching tarball of preloaded images
	I0318 13:57:32.218242   11136 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:57:32.218249   11136 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0318 13:57:32.218309   11136 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/old-k8s-version-255000/config.json ...
	I0318 13:57:32.218750   11136 start.go:360] acquireMachinesLock for old-k8s-version-255000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:57:32.218783   11136 start.go:364] duration metric: took 27.458µs to acquireMachinesLock for "old-k8s-version-255000"
	I0318 13:57:32.218794   11136 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:57:32.218798   11136 fix.go:54] fixHost starting: 
	I0318 13:57:32.218924   11136 fix.go:112] recreateIfNeeded on old-k8s-version-255000: state=Stopped err=<nil>
	W0318 13:57:32.218932   11136 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:57:32.223229   11136 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-255000" ...
	I0318 13:57:32.231211   11136 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:d7:89:38:4c:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/disk.qcow2
	I0318 13:57:32.233258   11136 main.go:141] libmachine: STDOUT: 
	I0318 13:57:32.233282   11136 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:57:32.233309   11136 fix.go:56] duration metric: took 14.51025ms for fixHost
	I0318 13:57:32.233313   11136 start.go:83] releasing machines lock for "old-k8s-version-255000", held for 14.525208ms
	W0318 13:57:32.233321   11136 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:57:32.233360   11136 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:32.233366   11136 start.go:728] Will try again in 5 seconds ...
	I0318 13:57:37.234709   11136 start.go:360] acquireMachinesLock for old-k8s-version-255000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:57:37.235110   11136 start.go:364] duration metric: took 292.334µs to acquireMachinesLock for "old-k8s-version-255000"
	I0318 13:57:37.235181   11136 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:57:37.235201   11136 fix.go:54] fixHost starting: 
	I0318 13:57:37.235904   11136 fix.go:112] recreateIfNeeded on old-k8s-version-255000: state=Stopped err=<nil>
	W0318 13:57:37.235933   11136 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:57:37.240588   11136 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-255000" ...
	I0318 13:57:37.244791   11136 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:d7:89:38:4c:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/old-k8s-version-255000/disk.qcow2
	I0318 13:57:37.254599   11136 main.go:141] libmachine: STDOUT: 
	I0318 13:57:37.254655   11136 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:57:37.254713   11136 fix.go:56] duration metric: took 19.515208ms for fixHost
	I0318 13:57:37.254728   11136 start.go:83] releasing machines lock for "old-k8s-version-255000", held for 19.596584ms
	W0318 13:57:37.254930   11136 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-255000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-255000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:37.263484   11136 out.go:177] 
	W0318 13:57:37.267542   11136 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:57:37.267578   11136 out.go:239] * 
	* 
	W0318 13:57:37.270384   11136 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:57:37.278426   11136 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-255000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000: exit status 7 (69.413708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-255000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.28s)
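
The stderr above also shows the restart path's recovery behavior: one "StartHost failed, but will try again" warning, a five second wait, then a single retry that fails identically before the GUEST_PROVISION exit. A simplified sketch of that one-retry shape, as implied by the log lines (hypothetical, not minikube's actual start.go):

	// retry_once.go: hypothetical sketch of the single 5s retry implied
	// by the "Will try again in 5 seconds ..." log lines above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the real driver start; here it always fails
	// the same way the captured log does.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second)
			if err := startHost(); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			}
		}
	}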

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-205000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-205000 create -f testdata/busybox.yaml: exit status 1 (29.797958ms)

** stderr ** 
	error: context "no-preload-205000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-205000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-205000 -n no-preload-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-205000 -n no-preload-205000: exit status 7 (31.080791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-205000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-205000 -n no-preload-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-205000 -n no-preload-205000: exit status 7 (30.359791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-205000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-205000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-205000 describe deploy/metrics-server -n kube-system: exit status 1 (26.749583ms)

** stderr ** 
	error: context "no-preload-205000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-205000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-205000 -n no-preload-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-205000 -n no-preload-205000: exit status 7 (30.598667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-255000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000: exit status 7 (33.600875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-255000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-255000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-255000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-255000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.966042ms)

** stderr ** 
	error: context "old-k8s-version-255000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-255000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000: exit status 7 (31.17775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-255000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-255000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000: exit status 7 (30.759958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-255000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
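
The "(-want +got)" block above is go-cmp diff output: the want side lists the default v1.20.0 images and the got side is empty because "image list" had no running VM to query. The same diff format can be reproduced with the github.com/google/go-cmp module (an illustrative snippet, with the want list abbreviated):

	// images_diff.go: illustrative reproduction of the (-want +got)
	// output above using github.com/google/go-cmp.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"k8s.gcr.io/coredns:1.7.0",
			"k8s.gcr.io/etcd:3.4.13-0",
			"k8s.gcr.io/kube-apiserver:v1.20.0",
			// remaining v1.20.0 defaults elided for brevity
		}
		var got []string // empty: the VM never started, so no images listed
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
		}
	}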

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-255000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-255000 --alsologtostderr -v=1: exit status 83 (43.5845ms)

-- stdout --
	* The control-plane node old-k8s-version-255000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-255000"

-- /stdout --
** stderr ** 
	I0318 13:57:37.557990   11187 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:57:37.558384   11187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:37.558387   11187 out.go:304] Setting ErrFile to fd 2...
	I0318 13:57:37.558390   11187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:37.558550   11187 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:57:37.558757   11187 out.go:298] Setting JSON to false
	I0318 13:57:37.558766   11187 mustload.go:65] Loading cluster: old-k8s-version-255000
	I0318 13:57:37.558962   11187 config.go:182] Loaded profile config "old-k8s-version-255000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0318 13:57:37.563423   11187 out.go:177] * The control-plane node old-k8s-version-255000 host is not running: state=Stopped
	I0318 13:57:37.567397   11187 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-255000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-255000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000: exit status 7 (30.668417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-255000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000: exit status 7 (30.992ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-255000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

TestStartStop/group/embed-certs/serial/FirstStart (10.18s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-142000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-142000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (10.1326015s)

-- stdout --
	* [embed-certs-142000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-142000" primary control-plane node in "embed-certs-142000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-142000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:57:38.046931   11211 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:57:38.047055   11211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:38.047058   11211 out.go:304] Setting ErrFile to fd 2...
	I0318 13:57:38.047060   11211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:38.047172   11211 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:57:38.048221   11211 out.go:298] Setting JSON to false
	I0318 13:57:38.064232   11211 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7030,"bootTime":1710788428,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:57:38.064296   11211 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:57:38.076082   11211 out.go:177] * [embed-certs-142000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:57:38.082037   11211 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:57:38.079130   11211 notify.go:220] Checking for updates...
	I0318 13:57:38.089046   11211 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:57:38.097073   11211 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:57:38.105020   11211 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:57:38.112012   11211 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:57:38.118966   11211 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:57:38.122372   11211 config.go:182] Loaded profile config "multinode-685000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:57:38.122452   11211 config.go:182] Loaded profile config "no-preload-205000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0318 13:57:38.122502   11211 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:57:38.126029   11211 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 13:57:38.132050   11211 start.go:297] selected driver: qemu2
	I0318 13:57:38.132060   11211 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:57:38.132068   11211 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:57:38.134498   11211 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:57:38.137014   11211 out.go:177] * Automatically selected the socket_vmnet network
	I0318 13:57:38.138165   11211 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:57:38.138202   11211 cni.go:84] Creating CNI manager for ""
	I0318 13:57:38.138210   11211 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:57:38.138220   11211 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 13:57:38.138252   11211 start.go:340] cluster config:
	{Name:embed-certs-142000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-142000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socke
t_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:57:38.143455   11211 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:38.152161   11211 out.go:177] * Starting "embed-certs-142000" primary control-plane node in "embed-certs-142000" cluster
	I0318 13:57:38.155020   11211 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:57:38.155043   11211 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:57:38.155059   11211 cache.go:56] Caching tarball of preloaded images
	I0318 13:57:38.155162   11211 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:57:38.155168   11211 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:57:38.155240   11211 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/embed-certs-142000/config.json ...
	I0318 13:57:38.155250   11211 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/embed-certs-142000/config.json: {Name:mkb57cc0e8e76d7f68885c6e31b1d54aaddcf68a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:57:38.155453   11211 start.go:360] acquireMachinesLock for embed-certs-142000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:57:38.155487   11211 start.go:364] duration metric: took 26.792µs to acquireMachinesLock for "embed-certs-142000"
	I0318 13:57:38.155497   11211 start.go:93] Provisioning new machine with config: &{Name:embed-certs-142000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.28.4 ClusterName:embed-certs-142000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:57:38.155524   11211 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:57:38.164017   11211 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:57:38.180021   11211 start.go:159] libmachine.API.Create for "embed-certs-142000" (driver="qemu2")
	I0318 13:57:38.180054   11211 client.go:168] LocalClient.Create starting
	I0318 13:57:38.180120   11211 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:57:38.180150   11211 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:38.180160   11211 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:38.180207   11211 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:57:38.180228   11211 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:38.180233   11211 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:38.180606   11211 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:57:38.449769   11211 main.go:141] libmachine: Creating SSH key...
	I0318 13:57:38.598863   11211 main.go:141] libmachine: Creating Disk image...
	I0318 13:57:38.598869   11211 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:57:38.599053   11211 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/disk.qcow2
	I0318 13:57:38.612307   11211 main.go:141] libmachine: STDOUT: 
	I0318 13:57:38.612330   11211 main.go:141] libmachine: STDERR: 
	I0318 13:57:38.612391   11211 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/disk.qcow2 +20000M
	I0318 13:57:38.623490   11211 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:57:38.623510   11211 main.go:141] libmachine: STDERR: 
	I0318 13:57:38.623527   11211 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/disk.qcow2
	I0318 13:57:38.623531   11211 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:57:38.623576   11211 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:9e:cb:2b:d4:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/disk.qcow2
	I0318 13:57:38.625426   11211 main.go:141] libmachine: STDOUT: 
	I0318 13:57:38.625443   11211 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:57:38.625462   11211 client.go:171] duration metric: took 445.405458ms to LocalClient.Create
	I0318 13:57:40.627389   11211 start.go:128] duration metric: took 2.471854625s to createHost
	I0318 13:57:40.627540   11211 start.go:83] releasing machines lock for "embed-certs-142000", held for 2.471992834s
	W0318 13:57:40.627608   11211 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:40.643073   11211 out.go:177] * Deleting "embed-certs-142000" in qemu2 ...
	W0318 13:57:40.673439   11211 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:40.673470   11211 start.go:728] Will try again in 5 seconds ...
	I0318 13:57:45.675581   11211 start.go:360] acquireMachinesLock for embed-certs-142000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:57:45.686033   11211 start.go:364] duration metric: took 10.376417ms to acquireMachinesLock for "embed-certs-142000"
	I0318 13:57:45.686098   11211 start.go:93] Provisioning new machine with config: &{Name:embed-certs-142000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-142000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:57:45.686307   11211 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:57:45.694867   11211 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:57:45.740971   11211 start.go:159] libmachine.API.Create for "embed-certs-142000" (driver="qemu2")
	I0318 13:57:45.741045   11211 client.go:168] LocalClient.Create starting
	I0318 13:57:45.741194   11211 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:57:45.741266   11211 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:45.741281   11211 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:45.741336   11211 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:57:45.741379   11211 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:45.741392   11211 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:45.741904   11211 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:57:45.894160   11211 main.go:141] libmachine: Creating SSH key...
	I0318 13:57:46.068272   11211 main.go:141] libmachine: Creating Disk image...
	I0318 13:57:46.068281   11211 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:57:46.068460   11211 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/disk.qcow2
	I0318 13:57:46.081341   11211 main.go:141] libmachine: STDOUT: 
	I0318 13:57:46.081488   11211 main.go:141] libmachine: STDERR: 
	I0318 13:57:46.081551   11211 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/disk.qcow2 +20000M
	I0318 13:57:46.094263   11211 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:57:46.094331   11211 main.go:141] libmachine: STDERR: 
	I0318 13:57:46.094343   11211 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/disk.qcow2
	I0318 13:57:46.094349   11211 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:57:46.094399   11211 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:4e:d0:d5:db:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/disk.qcow2
	I0318 13:57:46.096297   11211 main.go:141] libmachine: STDOUT: 
	I0318 13:57:46.096310   11211 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:57:46.096328   11211 client.go:171] duration metric: took 355.260459ms to LocalClient.Create
	I0318 13:57:48.098488   11211 start.go:128] duration metric: took 2.412166459s to createHost
	I0318 13:57:48.098542   11211 start.go:83] releasing machines lock for "embed-certs-142000", held for 2.412488209s
	W0318 13:57:48.098792   11211 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-142000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-142000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:48.118591   11211 out.go:177] 
	W0318 13:57:48.124590   11211 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:57:48.124621   11211 out.go:239] * 
	* 
	W0318 13:57:48.127430   11211 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:57:48.136450   11211 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-142000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000: exit status 7 (47.72325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-142000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.18s)
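
Every qemu2 failure in this report has the same proximate cause, visible in the log above: the VM is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, and that connection is refused because no daemon is listening. Below is a minimal diagnostic sketch in Go (hypothetical, not part of minikube or this test suite) that performs the same unix-socket dial and fails the same way when socket_vmnet is down:

	// probe_socket_vmnet.go: hypothetical diagnostic, not minikube code.
	// It attempts the same connection socket_vmnet_client makes before
	// handing a file descriptor to qemu-system-aarch64; with no daemon
	// listening it reports the "connection refused" seen in this log.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// SocketVMnetPath from the cluster config dumped above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "cannot connect to %s: %v\n", sock, err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Printf("%s is accepting connections\n", sock)
	}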

TestStartStop/group/no-preload/serial/SecondStart (7.55s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-205000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-205000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (7.502097542s)

-- stdout --
	* [no-preload-205000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-205000" primary control-plane node in "no-preload-205000" cluster
	* Restarting existing qemu2 VM for "no-preload-205000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-205000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:57:38.258851   11223 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:57:38.258995   11223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:38.259002   11223 out.go:304] Setting ErrFile to fd 2...
	I0318 13:57:38.259005   11223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:38.259137   11223 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:57:38.260412   11223 out.go:298] Setting JSON to false
	I0318 13:57:38.279369   11223 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7030,"bootTime":1710788428,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:57:38.279444   11223 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:57:38.287129   11223 out.go:177] * [no-preload-205000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:57:38.301078   11223 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:57:38.296170   11223 notify.go:220] Checking for updates...
	I0318 13:57:38.308059   11223 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:57:38.315086   11223 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:57:38.323879   11223 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:57:38.331078   11223 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:57:38.338066   11223 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:57:38.341426   11223 config.go:182] Loaded profile config "no-preload-205000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0318 13:57:38.341723   11223 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:57:38.345985   11223 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 13:57:38.351596   11223 start.go:297] selected driver: qemu2
	I0318 13:57:38.351602   11223 start.go:901] validating driver "qemu2" against &{Name:no-preload-205000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-205000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:57:38.351694   11223 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:57:38.354522   11223 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:57:38.354587   11223 cni.go:84] Creating CNI manager for ""
	I0318 13:57:38.354595   11223 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:57:38.354614   11223 start.go:340] cluster config:
	{Name:no-preload-205000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-205000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:57:38.359736   11223 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:38.369212   11223 out.go:177] * Starting "no-preload-205000" primary control-plane node in "no-preload-205000" cluster
	I0318 13:57:38.373090   11223 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 13:57:38.373225   11223 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/no-preload-205000/config.json ...
	I0318 13:57:38.373255   11223 cache.go:107] acquiring lock: {Name:mk189d694ac9f9bf1008521ce7d7ba734fe35b8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:38.373292   11223 cache.go:107] acquiring lock: {Name:mk398330268077967ffce2dc9a8d62348f59afaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:38.373328   11223 cache.go:115] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0318 13:57:38.373335   11223 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 85.792µs
	I0318 13:57:38.373341   11223 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0318 13:57:38.373348   11223 cache.go:107] acquiring lock: {Name:mke2593113ffc161eee833113097f92d52b8af4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:38.373359   11223 cache.go:115] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0318 13:57:38.373364   11223 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 109.625µs
	I0318 13:57:38.373369   11223 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0318 13:57:38.373362   11223 cache.go:107] acquiring lock: {Name:mk58b946f4193488078a1c9bd02b6cf2cfadc68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:38.373380   11223 cache.go:107] acquiring lock: {Name:mkf8c28ddf89cdfa144a055453b897f79301e1ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:38.373392   11223 cache.go:107] acquiring lock: {Name:mk2dadb96952744d9d91b9cac79cdedf004695d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:38.373385   11223 cache.go:115] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0318 13:57:38.373410   11223 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 62.625µs
	I0318 13:57:38.373414   11223 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0318 13:57:38.373418   11223 cache.go:115] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0318 13:57:38.373422   11223 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 42.875µs
	I0318 13:57:38.373417   11223 cache.go:107] acquiring lock: {Name:mk87a912866bad82ed77fec052abc1ee549cd109 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:38.373382   11223 cache.go:107] acquiring lock: {Name:mk31ec1f5f58bb2c039516376fd89df94e487cc9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:38.373480   11223 cache.go:115] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0318 13:57:38.373486   11223 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 81.417µs
	I0318 13:57:38.373490   11223 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0318 13:57:38.373431   11223 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0318 13:57:38.373440   11223 cache.go:115] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0318 13:57:38.373500   11223 cache.go:115] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0318 13:57:38.373503   11223 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 112.417µs
	I0318 13:57:38.373506   11223 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0318 13:57:38.373505   11223 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 172.958µs
	I0318 13:57:38.373510   11223 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0318 13:57:38.373444   11223 cache.go:115] /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 exists
	I0318 13:57:38.373517   11223 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0" took 179.542µs
	I0318 13:57:38.373524   11223 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0318 13:57:38.373530   11223 cache.go:87] Successfully saved all images to host disk.
	I0318 13:57:38.373707   11223 start.go:360] acquireMachinesLock for no-preload-205000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:57:40.627690   11223 start.go:364] duration metric: took 2.253969458s to acquireMachinesLock for "no-preload-205000"
	I0318 13:57:40.627796   11223 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:57:40.627834   11223 fix.go:54] fixHost starting: 
	I0318 13:57:40.628461   11223 fix.go:112] recreateIfNeeded on no-preload-205000: state=Stopped err=<nil>
	W0318 13:57:40.628513   11223 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:57:40.636922   11223 out.go:177] * Restarting existing qemu2 VM for "no-preload-205000" ...
	I0318 13:57:40.647393   11223 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:0f:a2:e1:9b:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/disk.qcow2
	I0318 13:57:40.658177   11223 main.go:141] libmachine: STDOUT: 
	I0318 13:57:40.658249   11223 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:57:40.658357   11223 fix.go:56] duration metric: took 30.538334ms for fixHost
	I0318 13:57:40.658374   11223 start.go:83] releasing machines lock for "no-preload-205000", held for 30.624125ms
	W0318 13:57:40.658412   11223 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:57:40.658579   11223 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:40.658595   11223 start.go:728] Will try again in 5 seconds ...
	I0318 13:57:45.660858   11223 start.go:360] acquireMachinesLock for no-preload-205000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:57:45.661241   11223 start.go:364] duration metric: took 284.125µs to acquireMachinesLock for "no-preload-205000"
	I0318 13:57:45.661390   11223 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:57:45.661415   11223 fix.go:54] fixHost starting: 
	I0318 13:57:45.662183   11223 fix.go:112] recreateIfNeeded on no-preload-205000: state=Stopped err=<nil>
	W0318 13:57:45.662209   11223 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:57:45.667878   11223 out.go:177] * Restarting existing qemu2 VM for "no-preload-205000" ...
	I0318 13:57:45.676070   11223 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:0f:a2:e1:9b:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/no-preload-205000/disk.qcow2
	I0318 13:57:45.685733   11223 main.go:141] libmachine: STDOUT: 
	I0318 13:57:45.685829   11223 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:57:45.685933   11223 fix.go:56] duration metric: took 24.517458ms for fixHost
	I0318 13:57:45.685953   11223 start.go:83] releasing machines lock for "no-preload-205000", held for 24.688125ms
	W0318 13:57:45.686214   11223 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-205000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-205000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:45.702882   11223 out.go:177] 
	W0318 13:57:45.706733   11223 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:57:45.706785   11223 out.go:239] * 
	* 
	W0318 13:57:45.708505   11223 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:57:45.718814   11223 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-205000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-205000 -n no-preload-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-205000 -n no-preload-205000: exit status 7 (50.21725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (7.55s)
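
The SecondStart failure above exercises minikube's restart path rather than the create path: fixHost tries "Restarting existing qemu2 VM", hits the same socket_vmnet refusal, logs "Will try again in 5 seconds ...", and retries exactly once before exiting with GUEST_PROVISION. A simplified sketch of that single-retry shape (an illustration inferred from the log, not minikube's actual implementation):

	// Single fixed-delay retry, mirroring the log above: one failed
	// start, a 5-second pause, one final attempt, then a fatal exit.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver start that fails with the
	// socket_vmnet "Connection refused" error.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			}
		}
	}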

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-205000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-205000 -n no-preload-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-205000 -n no-preload-205000: exit status 7 (36.116167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-205000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-205000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-205000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.869875ms)

** stderr ** 
	error: context "no-preload-205000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-205000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-205000 -n no-preload-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-205000 -n no-preload-205000: exit status 7 (36.010584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.07s)
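
This check and UserAppExistsAfterStop above fail before ever reaching a cluster: because no VM started, minikube never wrote a kubeconfig context for the profile, so both kubectl and the test's client config stop at `context "no-preload-205000" does not exist`. A sketch of that precondition using client-go's clientcmd package (an assumed dependency here; illustrative, not the suite's own helper):

	// Hypothetical illustration using k8s.io/client-go (assumed dependency):
	// resolve the default kubeconfig and check whether the profile's
	// context exists, which is the precondition these tests fail on.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		const name = "no-preload-205000"
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load() // honors KUBECONFIG, as set in the runs above
		if err != nil {
			fmt.Fprintf(os.Stderr, "loading kubeconfig: %v\n", err)
			os.Exit(1)
		}
		if _, ok := cfg.Contexts[name]; !ok {
			fmt.Printf("context %q does not exist\n", name) // the error seen above
			os.Exit(1)
		}
		fmt.Printf("context %q found\n", name)
	}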

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-205000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-205000 -n no-preload-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-205000 -n no-preload-205000: exit status 7 (31.356791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
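
The `(-want +got)` listing above is the diff convention of github.com/google/go-cmp: entries prefixed `-` are expected but absent. With the VM stopped, `minikube image list` returns nothing, so the entire expected image set for v1.29.0-rc.2 lands on the want side. A sketch of that comparison style (go-cmp assumed as a dependency; the expected list is abbreviated here):

	// Hypothetical illustration of the "(-want +got)" comparison via go-cmp.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/pause:3.9",
			// ... remaining v1.29.0-rc.2 control-plane images
		}
		var got []string // empty: the host never started, so no images are listed
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.29.0-rc.2 images missing (-want +got):\n%s", diff)
		}
	}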

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-205000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-205000 --alsologtostderr -v=1: exit status 83 (44.282166ms)

-- stdout --
	* The control-plane node no-preload-205000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-205000"

-- /stdout --
** stderr ** 
	I0318 13:57:45.991871   11248 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:57:45.992075   11248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:45.992078   11248 out.go:304] Setting ErrFile to fd 2...
	I0318 13:57:45.992080   11248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:45.992222   11248 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:57:45.992452   11248 out.go:298] Setting JSON to false
	I0318 13:57:45.992461   11248 mustload.go:65] Loading cluster: no-preload-205000
	I0318 13:57:45.992649   11248 config.go:182] Loaded profile config "no-preload-205000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0318 13:57:45.996806   11248 out.go:177] * The control-plane node no-preload-205000 host is not running: state=Stopped
	I0318 13:57:45.999831   11248 out.go:177]   To start a cluster, run: "minikube start -p no-preload-205000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-205000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-205000 -n no-preload-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-205000 -n no-preload-205000: exit status 7 (31.880292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-205000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-205000 -n no-preload-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-205000 -n no-preload-205000: exit status 7 (32.83125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-349000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-349000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (11.201853792s)

-- stdout --
	* [default-k8s-diff-port-349000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-349000" primary control-plane node in "default-k8s-diff-port-349000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-349000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:57:46.707906   11286 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:57:46.708034   11286 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:46.708039   11286 out.go:304] Setting ErrFile to fd 2...
	I0318 13:57:46.708042   11286 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:46.708163   11286 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:57:46.709191   11286 out.go:298] Setting JSON to false
	I0318 13:57:46.725339   11286 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7038,"bootTime":1710788428,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:57:46.725400   11286 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:57:46.730169   11286 out.go:177] * [default-k8s-diff-port-349000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:57:46.737073   11286 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:57:46.737139   11286 notify.go:220] Checking for updates...
	I0318 13:57:46.741122   11286 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:57:46.745037   11286 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:57:46.748043   11286 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:57:46.751119   11286 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:57:46.753990   11286 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:57:46.757481   11286 config.go:182] Loaded profile config "embed-certs-142000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:57:46.757563   11286 config.go:182] Loaded profile config "multinode-685000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:57:46.757610   11286 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:57:46.762061   11286 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 13:57:46.769074   11286 start.go:297] selected driver: qemu2
	I0318 13:57:46.769079   11286 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:57:46.769084   11286 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:57:46.771214   11286 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:57:46.774090   11286 out.go:177] * Automatically selected the socket_vmnet network
	I0318 13:57:46.775652   11286 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:57:46.775693   11286 cni.go:84] Creating CNI manager for ""
	I0318 13:57:46.775701   11286 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:57:46.775709   11286 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 13:57:46.775740   11286 start.go:340] cluster config:
	{Name:default-k8s-diff-port-349000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:57:46.780135   11286 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:46.787109   11286 out.go:177] * Starting "default-k8s-diff-port-349000" primary control-plane node in "default-k8s-diff-port-349000" cluster
	I0318 13:57:46.791017   11286 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:57:46.791030   11286 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:57:46.791034   11286 cache.go:56] Caching tarball of preloaded images
	I0318 13:57:46.791089   11286 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:57:46.791094   11286 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:57:46.791156   11286 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/default-k8s-diff-port-349000/config.json ...
	I0318 13:57:46.791169   11286 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/default-k8s-diff-port-349000/config.json: {Name:mk1124e791ef55e43701f30b02bba4806e691a5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:57:46.791418   11286 start.go:360] acquireMachinesLock for default-k8s-diff-port-349000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:57:48.098731   11286 start.go:364] duration metric: took 1.307206083s to acquireMachinesLock for "default-k8s-diff-port-349000"
	I0318 13:57:48.098940   11286 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:57:48.099183   11286 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:57:48.118581   11286 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:57:48.158985   11286 start.go:159] libmachine.API.Create for "default-k8s-diff-port-349000" (driver="qemu2")
	I0318 13:57:48.159021   11286 client.go:168] LocalClient.Create starting
	I0318 13:57:48.159106   11286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:57:48.159150   11286 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:48.159170   11286 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:48.159231   11286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:57:48.159264   11286 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:48.159276   11286 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:48.159727   11286 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:57:48.319895   11286 main.go:141] libmachine: Creating SSH key...
	I0318 13:57:48.405293   11286 main.go:141] libmachine: Creating Disk image...
	I0318 13:57:48.405304   11286 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:57:48.405501   11286 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2
	I0318 13:57:48.420282   11286 main.go:141] libmachine: STDOUT: 
	I0318 13:57:48.420316   11286 main.go:141] libmachine: STDERR: 
	I0318 13:57:48.420431   11286 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2 +20000M
	I0318 13:57:48.432554   11286 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:57:48.432579   11286 main.go:141] libmachine: STDERR: 
	I0318 13:57:48.432596   11286 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2
	I0318 13:57:48.432602   11286 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:57:48.432643   11286 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:6b:9f:e3:55:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2
	I0318 13:57:48.434581   11286 main.go:141] libmachine: STDOUT: 
	I0318 13:57:48.434601   11286 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:57:48.434621   11286 client.go:171] duration metric: took 275.596ms to LocalClient.Create
	I0318 13:57:50.436794   11286 start.go:128] duration metric: took 2.337561834s to createHost
	I0318 13:57:50.436879   11286 start.go:83] releasing machines lock for "default-k8s-diff-port-349000", held for 2.338109458s
	W0318 13:57:50.436989   11286 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:50.449362   11286 out.go:177] * Deleting "default-k8s-diff-port-349000" in qemu2 ...
	W0318 13:57:50.479507   11286 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:50.479549   11286 start.go:728] Will try again in 5 seconds ...
	I0318 13:57:55.481718   11286 start.go:360] acquireMachinesLock for default-k8s-diff-port-349000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:57:55.482073   11286 start.go:364] duration metric: took 264.5µs to acquireMachinesLock for "default-k8s-diff-port-349000"
	I0318 13:57:55.482180   11286 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:57:55.482503   11286 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:57:55.488173   11286 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:57:55.538109   11286 start.go:159] libmachine.API.Create for "default-k8s-diff-port-349000" (driver="qemu2")
	I0318 13:57:55.538160   11286 client.go:168] LocalClient.Create starting
	I0318 13:57:55.538279   11286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:57:55.538342   11286 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:55.538362   11286 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:55.538422   11286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:57:55.538466   11286 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:55.538478   11286 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:55.538978   11286 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:57:55.691870   11286 main.go:141] libmachine: Creating SSH key...
	I0318 13:57:55.797472   11286 main.go:141] libmachine: Creating Disk image...
	I0318 13:57:55.797478   11286 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:57:55.797648   11286 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2
	I0318 13:57:55.810371   11286 main.go:141] libmachine: STDOUT: 
	I0318 13:57:55.810392   11286 main.go:141] libmachine: STDERR: 
	I0318 13:57:55.810447   11286 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2 +20000M
	I0318 13:57:55.821090   11286 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:57:55.821107   11286 main.go:141] libmachine: STDERR: 
	I0318 13:57:55.821119   11286 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2
	I0318 13:57:55.821125   11286 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:57:55.821164   11286 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:24:88:88:d1:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2
	I0318 13:57:55.822917   11286 main.go:141] libmachine: STDOUT: 
	I0318 13:57:55.822946   11286 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:57:55.822959   11286 client.go:171] duration metric: took 284.794084ms to LocalClient.Create
	I0318 13:57:57.825296   11286 start.go:128] duration metric: took 2.342748958s to createHost
	I0318 13:57:57.825364   11286 start.go:83] releasing machines lock for "default-k8s-diff-port-349000", held for 2.343281208s
	W0318 13:57:57.825667   11286 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-349000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-349000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:57.837131   11286 out.go:177] 
	W0318 13:57:57.844180   11286 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:57:57.844258   11286 out.go:239] * 
	* 
	W0318 13:57:57.847427   11286 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:57:57.856112   11286 out.go:177] 

** /stderr **
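Before the VM launch fails, the disk-image preparation itself succeeds in both attempts: libmachine shells out to qemu-img twice, converting the raw scaffold to qcow2 and then growing it by the requested 20000 MB. A minimal Go sketch of that sequence via os/exec (paths are illustrative, not the real minikube home):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes one qemu-img subcommand and aborts on failure,
// mirroring the "executing: qemu-img ..." lines in the log above.
func run(args ...string) {
	fmt.Println("executing: qemu-img", args)
	cmd := exec.Command("qemu-img", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "qemu-img failed:", err)
		os.Exit(1)
	}
}

func main() {
	base := "machines/default-k8s-diff-port-349000" // illustrative path
	run("convert", "-f", "raw", "-O", "qcow2", base+"/disk.qcow2.raw", base+"/disk.qcow2")
	run("resize", base+"/disk.qcow2", "+20000M")
}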
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-349000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000: exit status 7 (70.449ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-349000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.27s)
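Every failure in this group traces back to the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon, so qemu-system-aarch64 is never launched and the host stays "Stopped". A minimal Go probe for that precondition, assuming the SocketVMnetPath from the config above:

package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// SocketVMnetPath from the cluster config dumped above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.Dial("unix", sock)
	if err != nil {
		// The same condition the driver logs as:
		//   Failed to connect to "/var/run/socket_vmnet": Connection refused
		// i.e. the socket_vmnet daemon is not running on the host.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}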

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-142000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-142000 create -f testdata/busybox.yaml: exit status 1 (31.450584ms)

** stderr ** 
	error: context "embed-certs-142000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-142000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000: exit status 7 (36.403667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-142000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000: exit status 7 (36.441084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-142000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
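This deploy step never reaches a cluster: the earlier start failed, so kubeconfig has no entry for the profile and kubectl --context exits immediately. A hedged sketch of that precondition check, assuming k8s.io/client-go is available (this is not part of the test suite):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// In this run KUBECONFIG points at .../18421-6777/kubeconfig.
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	const name = "embed-certs-142000"
	if _, ok := cfg.Contexts[name]; !ok {
		// Matches the failure above: error: context "embed-certs-142000" does not exist
		fmt.Fprintf(os.Stderr, "context %q does not exist\n", name)
		os.Exit(1)
	}
	fmt.Println("context present; the deploy step could proceed")
}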

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-142000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-142000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-142000 describe deploy/metrics-server -n kube-system: exit status 1 (28.209666ms)

** stderr ** 
	error: context "embed-certs-142000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-142000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000: exit status 7 (31.61375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-142000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)
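The assertion at start_stop_delete_test.go:221 expects the deployment image to be the custom registry joined onto the custom image, i.e. "fake.domain/registry.k8s.io/echoserver:1.4". A one-line illustration of the string being searched for:

package main

import "fmt"

func main() {
	registry := "fake.domain"                 // from --registries=MetricsServer=fake.domain
	image := "registry.k8s.io/echoserver:1.4" // from --images=MetricsServer=...
	fmt.Println(registry + "/" + image)       // fake.domain/registry.k8s.io/echoserver:1.4
}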

TestStartStop/group/embed-certs/serial/SecondStart (5.92s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-142000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-142000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (5.857954125s)

-- stdout --
	* [embed-certs-142000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-142000" primary control-plane node in "embed-certs-142000" cluster
	* Restarting existing qemu2 VM for "embed-certs-142000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-142000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:57:52.068320   11331 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:57:52.068694   11331 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:52.068699   11331 out.go:304] Setting ErrFile to fd 2...
	I0318 13:57:52.068702   11331 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:52.068886   11331 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:57:52.070346   11331 out.go:298] Setting JSON to false
	I0318 13:57:52.086691   11331 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7044,"bootTime":1710788428,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:57:52.086750   11331 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:57:52.091744   11331 out.go:177] * [embed-certs-142000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:57:52.098746   11331 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:57:52.102707   11331 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:57:52.098792   11331 notify.go:220] Checking for updates...
	I0318 13:57:52.110713   11331 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:57:52.113750   11331 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:57:52.116745   11331 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:57:52.119762   11331 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:57:52.123059   11331 config.go:182] Loaded profile config "embed-certs-142000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:57:52.123329   11331 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:57:52.127750   11331 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 13:57:52.134716   11331 start.go:297] selected driver: qemu2
	I0318 13:57:52.134722   11331 start.go:901] validating driver "qemu2" against &{Name:embed-certs-142000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-142000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:57:52.134794   11331 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:57:52.137217   11331 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:57:52.137261   11331 cni.go:84] Creating CNI manager for ""
	I0318 13:57:52.137268   11331 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:57:52.137292   11331 start.go:340] cluster config:
	{Name:embed-certs-142000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-142000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:57:52.141692   11331 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:52.148680   11331 out.go:177] * Starting "embed-certs-142000" primary control-plane node in "embed-certs-142000" cluster
	I0318 13:57:52.152719   11331 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:57:52.152733   11331 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:57:52.152739   11331 cache.go:56] Caching tarball of preloaded images
	I0318 13:57:52.152792   11331 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:57:52.152798   11331 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:57:52.152860   11331 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/embed-certs-142000/config.json ...
	I0318 13:57:52.153306   11331 start.go:360] acquireMachinesLock for embed-certs-142000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:57:52.153341   11331 start.go:364] duration metric: took 28.583µs to acquireMachinesLock for "embed-certs-142000"
	I0318 13:57:52.153351   11331 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:57:52.153356   11331 fix.go:54] fixHost starting: 
	I0318 13:57:52.153484   11331 fix.go:112] recreateIfNeeded on embed-certs-142000: state=Stopped err=<nil>
	W0318 13:57:52.153493   11331 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:57:52.157770   11331 out.go:177] * Restarting existing qemu2 VM for "embed-certs-142000" ...
	I0318 13:57:52.164738   11331 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:4e:d0:d5:db:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/disk.qcow2
	I0318 13:57:52.166996   11331 main.go:141] libmachine: STDOUT: 
	I0318 13:57:52.167019   11331 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:57:52.167051   11331 fix.go:56] duration metric: took 13.693792ms for fixHost
	I0318 13:57:52.167056   11331 start.go:83] releasing machines lock for "embed-certs-142000", held for 13.70975ms
	W0318 13:57:52.167064   11331 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:57:52.167114   11331 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:52.167119   11331 start.go:728] Will try again in 5 seconds ...
	I0318 13:57:57.169255   11331 start.go:360] acquireMachinesLock for embed-certs-142000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:57:57.825546   11331 start.go:364] duration metric: took 656.117292ms to acquireMachinesLock for "embed-certs-142000"
	I0318 13:57:57.825733   11331 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:57:57.825750   11331 fix.go:54] fixHost starting: 
	I0318 13:57:57.826524   11331 fix.go:112] recreateIfNeeded on embed-certs-142000: state=Stopped err=<nil>
	W0318 13:57:57.826550   11331 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:57:57.837131   11331 out.go:177] * Restarting existing qemu2 VM for "embed-certs-142000" ...
	I0318 13:57:57.844436   11331 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:4e:d0:d5:db:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/embed-certs-142000/disk.qcow2
	I0318 13:57:57.854255   11331 main.go:141] libmachine: STDOUT: 
	I0318 13:57:57.854342   11331 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:57:57.854448   11331 fix.go:56] duration metric: took 28.699708ms for fixHost
	I0318 13:57:57.854477   11331 start.go:83] releasing machines lock for "embed-certs-142000", held for 28.866ms
	W0318 13:57:57.854662   11331 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-142000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-142000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:57:57.867091   11331 out.go:177] 
	W0318 13:57:57.871243   11331 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:57:57.871290   11331 out.go:239] * 
	* 
	W0318 13:57:57.874175   11331 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:57:57.884183   11331 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-142000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000: exit status 7 (59.238ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-142000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.92s)
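The "Will try again in 5 seconds" lines above show the start path retrying exactly once after a fixed pause before giving up with GUEST_PROVISION. A stand-alone sketch of that behaviour (startHost is a stand-in, not minikube's real API):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for minikube's host-start step; in this run the real
// step always failed with the socket_vmnet connection error.
func startHost() error {
	return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // the fixed pause visible at start.go:728
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}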

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-349000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-349000 create -f testdata/busybox.yaml: exit status 1 (31.734042ms)

** stderr ** 
	error: context "default-k8s-diff-port-349000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-349000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000: exit status 7 (32.620292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-349000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000: exit status 7 (35.97775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-349000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-142000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000: exit status 7 (35.985125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-142000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)
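Each post-mortem runs minikube status --format={{.Host}}; the --format flag takes a Go text/template, so only the selected field is printed. A small sketch with a hypothetical Status struct standing in for minikube's:

package main

import (
	"os"
	"text/template"
)

// Status is a hypothetical stand-in for minikube's status struct;
// only the field selected by the template is rendered.
type Status struct {
	Host, Kubelet, APIServer string
}

func main() {
	st := Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	if err := tmpl.Execute(os.Stdout, st); err != nil { // prints: Stopped
		panic(err)
	}
}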

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-142000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-142000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-142000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.551125ms)

** stderr ** 
	error: context "embed-certs-142000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-142000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000: exit status 7 (32.849917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-142000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-349000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-349000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-349000 describe deploy/metrics-server -n kube-system: exit status 1 (28.411458ms)

** stderr ** 
	error: context "default-k8s-diff-port-349000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-349000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000: exit status 7 (39.535125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-349000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-142000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000: exit status 7 (32.734959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-142000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
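The (-want +got) output above is the go-cmp diff format: the expected image list is compared against what minikube image list returns, and because the VM never started, got is empty and every entry appears as a minus line. A sketch, assuming github.com/google/go-cmp:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// A shortened version of the expected list from the failure above.
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/kube-apiserver:v1.28.4",
		"registry.k8s.io/pause:3.9",
	}
	var got []string // empty: the VM never started, so `image list` returned nothing
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.28.4 images missing (-want +got):\n%s", diff)
	}
}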

TestStartStop/group/embed-certs/serial/Pause (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-142000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-142000 --alsologtostderr -v=1: exit status 83 (53.830291ms)

-- stdout --
	* The control-plane node embed-certs-142000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-142000"

-- /stdout --
** stderr ** 
	I0318 13:57:58.174104   11366 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:57:58.174233   11366 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:58.174237   11366 out.go:304] Setting ErrFile to fd 2...
	I0318 13:57:58.174239   11366 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:58.174358   11366 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:57:58.174580   11366 out.go:298] Setting JSON to false
	I0318 13:57:58.174589   11366 mustload.go:65] Loading cluster: embed-certs-142000
	I0318 13:57:58.174788   11366 config.go:182] Loaded profile config "embed-certs-142000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:57:58.181056   11366 out.go:177] * The control-plane node embed-certs-142000 host is not running: state=Stopped
	I0318 13:57:58.188019   11366 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-142000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-142000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000: exit status 7 (32.903625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-142000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000: exit status 7 (29.91425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-142000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.12s)
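Lines of the form "Non-zero exit: ... exit status 83" come from the harness running the binary and unwrapping the process exit status. A minimal sketch of that extraction (the command shown is illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Illustrative command; any failing binary is handled the same way.
	cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "embed-certs-142000")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// e.g. "exit status 83" in the Pause failure above
		fmt.Printf("Non-zero exit: exit status %d\n%s", ee.ExitCode(), out)
	}
}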

TestStartStop/group/newest-cni/serial/FirstStart (9.87s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-396000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-396000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (9.794930709s)

-- stdout --
	* [newest-cni-396000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-396000" primary control-plane node in "newest-cni-396000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-396000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:57:58.646692   11396 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:57:58.646826   11396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:58.646829   11396 out.go:304] Setting ErrFile to fd 2...
	I0318 13:57:58.646832   11396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:57:58.646953   11396 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:57:58.648114   11396 out.go:298] Setting JSON to false
	I0318 13:57:58.664382   11396 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7050,"bootTime":1710788428,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:57:58.664447   11396 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:57:58.669365   11396 out.go:177] * [newest-cni-396000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:57:58.676317   11396 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:57:58.679271   11396 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:57:58.676325   11396 notify.go:220] Checking for updates...
	I0318 13:57:58.686219   11396 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:57:58.689310   11396 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:57:58.692256   11396 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:57:58.695224   11396 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:57:58.698557   11396 config.go:182] Loaded profile config "default-k8s-diff-port-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:57:58.698625   11396 config.go:182] Loaded profile config "multinode-685000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:57:58.698677   11396 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:57:58.703314   11396 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 13:57:58.710232   11396 start.go:297] selected driver: qemu2
	I0318 13:57:58.710237   11396 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:57:58.710243   11396 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:57:58.712418   11396 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0318 13:57:58.712444   11396 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0318 13:57:58.720273   11396 out.go:177] * Automatically selected the socket_vmnet network
	I0318 13:57:58.723338   11396 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0318 13:57:58.723374   11396 cni.go:84] Creating CNI manager for ""
	I0318 13:57:58.723381   11396 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:57:58.723386   11396 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 13:57:58.723411   11396 start.go:340] cluster config:
	{Name:newest-cni-396000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:57:58.728133   11396 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:57:58.733266   11396 out.go:177] * Starting "newest-cni-396000" primary control-plane node in "newest-cni-396000" cluster
	I0318 13:57:58.737275   11396 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 13:57:58.737294   11396 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0318 13:57:58.737303   11396 cache.go:56] Caching tarball of preloaded images
	I0318 13:57:58.737356   11396 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:57:58.737362   11396 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0318 13:57:58.737436   11396 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/newest-cni-396000/config.json ...
	I0318 13:57:58.737449   11396 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/newest-cni-396000/config.json: {Name:mk33ce0632fa26690b9b79a27df10688e60bed22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:57:58.737682   11396 start.go:360] acquireMachinesLock for newest-cni-396000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:57:58.737715   11396 start.go:364] duration metric: took 27.167µs to acquireMachinesLock for "newest-cni-396000"
	I0318 13:57:58.737730   11396 start.go:93] Provisioning new machine with config: &{Name:newest-cni-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:57:58.737759   11396 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:57:58.746299   11396 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:57:58.763986   11396 start.go:159] libmachine.API.Create for "newest-cni-396000" (driver="qemu2")
	I0318 13:57:58.764027   11396 client.go:168] LocalClient.Create starting
	I0318 13:57:58.764091   11396 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:57:58.764122   11396 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:58.764132   11396 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:58.764174   11396 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:57:58.764196   11396 main.go:141] libmachine: Decoding PEM data...
	I0318 13:57:58.764206   11396 main.go:141] libmachine: Parsing certificate...
	I0318 13:57:58.764553   11396 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:57:58.905999   11396 main.go:141] libmachine: Creating SSH key...
	I0318 13:57:58.947130   11396 main.go:141] libmachine: Creating Disk image...
	I0318 13:57:58.947135   11396 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:57:58.947309   11396 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/disk.qcow2
	I0318 13:57:58.959600   11396 main.go:141] libmachine: STDOUT: 
	I0318 13:57:58.959629   11396 main.go:141] libmachine: STDERR: 
	I0318 13:57:58.959689   11396 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/disk.qcow2 +20000M
	I0318 13:57:58.970424   11396 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:57:58.970449   11396 main.go:141] libmachine: STDERR: 
	I0318 13:57:58.970465   11396 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/disk.qcow2
	I0318 13:57:58.970469   11396 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:57:58.970501   11396 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:d9:c0:54:42:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/disk.qcow2
	I0318 13:57:58.972224   11396 main.go:141] libmachine: STDOUT: 
	I0318 13:57:58.972244   11396 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:57:58.972261   11396 client.go:171] duration metric: took 208.23275ms to LocalClient.Create
	I0318 13:58:00.974466   11396 start.go:128] duration metric: took 2.23669175s to createHost
	I0318 13:58:00.974538   11396 start.go:83] releasing machines lock for "newest-cni-396000", held for 2.236823875s
	W0318 13:58:00.974609   11396 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:58:00.985661   11396 out.go:177] * Deleting "newest-cni-396000" in qemu2 ...
	W0318 13:58:01.016411   11396 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:58:01.016440   11396 start.go:728] Will try again in 5 seconds ...
	I0318 13:58:06.018572   11396 start.go:360] acquireMachinesLock for newest-cni-396000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:58:06.018982   11396 start.go:364] duration metric: took 319.666µs to acquireMachinesLock for "newest-cni-396000"
	I0318 13:58:06.019102   11396 start.go:93] Provisioning new machine with config: &{Name:newest-cni-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:58:06.019487   11396 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 13:58:06.029066   11396 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:58:06.081645   11396 start.go:159] libmachine.API.Create for "newest-cni-396000" (driver="qemu2")
	I0318 13:58:06.081697   11396 client.go:168] LocalClient.Create starting
	I0318 13:58:06.081800   11396 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/ca.pem
	I0318 13:58:06.081882   11396 main.go:141] libmachine: Decoding PEM data...
	I0318 13:58:06.081899   11396 main.go:141] libmachine: Parsing certificate...
	I0318 13:58:06.081958   11396 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18421-6777/.minikube/certs/cert.pem
	I0318 13:58:06.082000   11396 main.go:141] libmachine: Decoding PEM data...
	I0318 13:58:06.082018   11396 main.go:141] libmachine: Parsing certificate...
	I0318 13:58:06.082523   11396 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0318 13:58:06.233598   11396 main.go:141] libmachine: Creating SSH key...
	I0318 13:58:06.324095   11396 main.go:141] libmachine: Creating Disk image...
	I0318 13:58:06.324103   11396 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 13:58:06.324282   11396 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/disk.qcow2.raw /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/disk.qcow2
	I0318 13:58:06.336540   11396 main.go:141] libmachine: STDOUT: 
	I0318 13:58:06.336583   11396 main.go:141] libmachine: STDERR: 
	I0318 13:58:06.336637   11396 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/disk.qcow2 +20000M
	I0318 13:58:06.347333   11396 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 13:58:06.347364   11396 main.go:141] libmachine: STDERR: 
	I0318 13:58:06.347375   11396 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/disk.qcow2
	I0318 13:58:06.347379   11396 main.go:141] libmachine: Starting QEMU VM...
	I0318 13:58:06.347414   11396 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:14:0e:4f:ba:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/disk.qcow2
	I0318 13:58:06.349102   11396 main.go:141] libmachine: STDOUT: 
	I0318 13:58:06.349124   11396 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:58:06.349136   11396 client.go:171] duration metric: took 267.434917ms to LocalClient.Create
	I0318 13:58:08.351304   11396 start.go:128] duration metric: took 2.331777583s to createHost
	I0318 13:58:08.351369   11396 start.go:83] releasing machines lock for "newest-cni-396000", held for 2.332377333s
	W0318 13:58:08.351758   11396 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-396000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-396000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:58:08.368343   11396 out.go:177] 
	W0318 13:58:08.376463   11396 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:58:08.376492   11396 out.go:239] * 
	* 
	W0318 13:58:08.379309   11396 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:58:08.391347   11396 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-396000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-396000 -n newest-cni-396000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-396000 -n newest-cni-396000: exit status 7 (67.340625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-396000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.87s)
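
The createHost trace above shows exactly where this dies: libmachine shells out to socket_vmnet_client, which must connect to the unix socket before it execs qemu-system-aarch64 and hands the connected socket over as file descriptor 3 (-netdev socket,id=net0,fd=3). When that connect is refused, qemu never starts. A trimmed sketch of the invocation; the full command line appears in the log, and the disk path is shortened here for readability:

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 \
      -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3 \
      -daemonize disk.qcow2
    # "Failed to connect to /var/run/socket_vmnet: Connection refused" means
    # socket_vmnet_client gave up before qemu was ever spawned.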

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-349000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-349000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (6.615610625s)

-- stdout --
	* [default-k8s-diff-port-349000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-349000" primary control-plane node in "default-k8s-diff-port-349000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-349000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-349000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:58:01.849650   11425 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:58:01.849769   11425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:58:01.849773   11425 out.go:304] Setting ErrFile to fd 2...
	I0318 13:58:01.849775   11425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:58:01.849901   11425 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:58:01.850919   11425 out.go:298] Setting JSON to false
	I0318 13:58:01.867110   11425 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7053,"bootTime":1710788428,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:58:01.867172   11425 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:58:01.872266   11425 out.go:177] * [default-k8s-diff-port-349000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:58:01.879180   11425 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:58:01.882132   11425 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:58:01.879238   11425 notify.go:220] Checking for updates...
	I0318 13:58:01.888197   11425 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:58:01.889563   11425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:58:01.892133   11425 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:58:01.895178   11425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:58:01.898503   11425 config.go:182] Loaded profile config "default-k8s-diff-port-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:58:01.898801   11425 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:58:01.903176   11425 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 13:58:01.910179   11425 start.go:297] selected driver: qemu2
	I0318 13:58:01.910186   11425 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:58:01.910264   11425 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:58:01.912565   11425 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:58:01.912611   11425 cni.go:84] Creating CNI manager for ""
	I0318 13:58:01.912618   11425 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:58:01.912648   11425 start.go:340] cluster config:
	{Name:default-k8s-diff-port-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:58:01.916935   11425 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:58:01.924214   11425 out.go:177] * Starting "default-k8s-diff-port-349000" primary control-plane node in "default-k8s-diff-port-349000" cluster
	I0318 13:58:01.928139   11425 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:58:01.928155   11425 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:58:01.928164   11425 cache.go:56] Caching tarball of preloaded images
	I0318 13:58:01.928240   11425 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:58:01.928246   11425 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:58:01.928310   11425 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/default-k8s-diff-port-349000/config.json ...
	I0318 13:58:01.928750   11425 start.go:360] acquireMachinesLock for default-k8s-diff-port-349000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:58:01.928781   11425 start.go:364] duration metric: took 25.375µs to acquireMachinesLock for "default-k8s-diff-port-349000"
	I0318 13:58:01.928790   11425 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:58:01.928794   11425 fix.go:54] fixHost starting: 
	I0318 13:58:01.928907   11425 fix.go:112] recreateIfNeeded on default-k8s-diff-port-349000: state=Stopped err=<nil>
	W0318 13:58:01.928915   11425 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:58:01.933187   11425 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-349000" ...
	I0318 13:58:01.941152   11425 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:24:88:88:d1:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2
	I0318 13:58:01.943188   11425 main.go:141] libmachine: STDOUT: 
	I0318 13:58:01.943209   11425 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:58:01.943238   11425 fix.go:56] duration metric: took 14.442541ms for fixHost
	I0318 13:58:01.943242   11425 start.go:83] releasing machines lock for "default-k8s-diff-port-349000", held for 14.457333ms
	W0318 13:58:01.943251   11425 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:58:01.943283   11425 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:58:01.943288   11425 start.go:728] Will try again in 5 seconds ...
	I0318 13:58:06.945496   11425 start.go:360] acquireMachinesLock for default-k8s-diff-port-349000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:58:08.351591   11425 start.go:364] duration metric: took 1.405965583s to acquireMachinesLock for "default-k8s-diff-port-349000"
	I0318 13:58:08.351754   11425 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:58:08.351779   11425 fix.go:54] fixHost starting: 
	I0318 13:58:08.352467   11425 fix.go:112] recreateIfNeeded on default-k8s-diff-port-349000: state=Stopped err=<nil>
	W0318 13:58:08.352497   11425 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:58:08.372334   11425 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-349000" ...
	I0318 13:58:08.380513   11425 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:24:88:88:d1:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2
	I0318 13:58:08.390495   11425 main.go:141] libmachine: STDOUT: 
	I0318 13:58:08.390576   11425 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:58:08.390672   11425 fix.go:56] duration metric: took 38.897542ms for fixHost
	I0318 13:58:08.390699   11425 start.go:83] releasing machines lock for "default-k8s-diff-port-349000", held for 39.06375ms
	W0318 13:58:08.390924   11425 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-349000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-349000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:58:08.403321   11425 out.go:177] 
	W0318 13:58:08.407271   11425 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:58:08.407337   11425 out.go:239] * 
	* 
	W0318 13:58:08.410294   11425 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:58:08.422773   11425 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-349000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000: exit status 7 (52.544625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-349000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.67s)
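
Unlike the FirstStart failures, SecondStart goes through fixHost ("Restarting existing qemu2 VM") because a stopped machine already exists, but it hits the same refused socket connect. The recovery minikube's own output suggests, sketched with the flags from this test; it only helps once socket_vmnet is reachable again:

    out/minikube-darwin-arm64 delete -p default-k8s-diff-port-349000
    out/minikube-darwin-arm64 start -p default-k8s-diff-port-349000 --memory=2200 \
      --apiserver-port=8444 --driver=qemu2 --kubernetes-version=v1.28.4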

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-349000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000: exit status 7 (39.736875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-349000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)
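
This failure is pure fallout from the failed start: the cluster never came up, so the kubeconfig holds no context named default-k8s-diff-port-349000 and the dashboard-pod wait cannot even build a client config. A quick hedged check, with the KUBECONFIG path taken from the logs above:

    KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig \
      kubectl config get-contexts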

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-349000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-349000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-349000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.451458ms)

** stderr ** 
	error: context "default-k8s-diff-port-349000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-349000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000: exit status 7 (37.236ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-349000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.07s)
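
Both dashboard assertions collapse for the same reason: with no context, kubectl cannot reach a cluster, so the deployment info comes back empty and the registry.k8s.io/echoserver:1.4 image check has nothing to match against. For reference, a sketch of what the assertion inspects on a healthy cluster; the jsonpath query is standard kubectl, and the deployment name comes from the log:

    kubectl --context default-k8s-diff-port-349000 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # Expected to print the overridden scraper image, registry.k8s.io/echoserver:1.4.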

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-349000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000: exit status 7 (31.562667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-349000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
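
The -want/+got diff above reads as "every image missing, nothing received": image list against a stopped host returns an empty set, so all eight expected v1.28.4 images land on the -want side. The query as the test ran it:

    out/minikube-darwin-arm64 -p default-k8s-diff-port-349000 image list --format=json
    # With the VM stopped this yields an empty list, hence the all-minus diff.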

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-349000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-349000 --alsologtostderr -v=1: exit status 83 (43.08875ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-349000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-349000"

-- /stdout --
** stderr ** 
	I0318 13:58:08.695433   11456 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:58:08.695580   11456 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:58:08.695583   11456 out.go:304] Setting ErrFile to fd 2...
	I0318 13:58:08.695586   11456 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:58:08.695714   11456 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:58:08.695962   11456 out.go:298] Setting JSON to false
	I0318 13:58:08.695972   11456 mustload.go:65] Loading cluster: default-k8s-diff-port-349000
	I0318 13:58:08.696164   11456 config.go:182] Loaded profile config "default-k8s-diff-port-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:58:08.700271   11456 out.go:177] * The control-plane node default-k8s-diff-port-349000 host is not running: state=Stopped
	I0318 13:58:08.704346   11456 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-349000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-349000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000: exit status 7 (31.41725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-349000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000: exit status 7 (31.021708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-349000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)
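
The exit codes in this group follow one pattern, all visible in the output above: start fails with 80 (the GUEST_PROVISION error class), pause refuses with 83 because the control-plane host is not running, and status returns 7 for a stopped host, which helpers_test.go explicitly treats as "may be ok". A sketch for capturing the code directly:

    out/minikube-darwin-arm64 pause -p default-k8s-diff-port-349000 --alsologtostderr -v=1
    echo "pause exit code: $?"   # 83 here: host is state=Stopped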

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-396000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-396000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (5.186975667s)

-- stdout --
	* [newest-cni-396000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-396000" primary control-plane node in "newest-cni-396000" cluster
	* Restarting existing qemu2 VM for "newest-cni-396000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-396000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 13:58:10.551059   11491 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:58:10.551175   11491 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:58:10.551178   11491 out.go:304] Setting ErrFile to fd 2...
	I0318 13:58:10.551187   11491 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:58:10.551321   11491 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:58:10.552337   11491 out.go:298] Setting JSON to false
	I0318 13:58:10.568392   11491 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7062,"bootTime":1710788428,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:58:10.568444   11491 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:58:10.572268   11491 out.go:177] * [newest-cni-396000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:58:10.579312   11491 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:58:10.579361   11491 notify.go:220] Checking for updates...
	I0318 13:58:10.586205   11491 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:58:10.589235   11491 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:58:10.592185   11491 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:58:10.595224   11491 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:58:10.598234   11491 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:58:10.601445   11491 config.go:182] Loaded profile config "newest-cni-396000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0318 13:58:10.601731   11491 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:58:10.606187   11491 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 13:58:10.612175   11491 start.go:297] selected driver: qemu2
	I0318 13:58:10.612182   11491 start.go:901] validating driver "qemu2" against &{Name:newest-cni-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:58:10.612226   11491 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:58:10.614500   11491 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0318 13:58:10.614546   11491 cni.go:84] Creating CNI manager for ""
	I0318 13:58:10.614554   11491 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:58:10.614584   11491 start.go:340] cluster config:
	{Name:newest-cni-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:58:10.618979   11491 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:58:10.626305   11491 out.go:177] * Starting "newest-cni-396000" primary control-plane node in "newest-cni-396000" cluster
	I0318 13:58:10.630250   11491 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 13:58:10.630266   11491 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0318 13:58:10.630279   11491 cache.go:56] Caching tarball of preloaded images
	I0318 13:58:10.630339   11491 preload.go:173] Found /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 13:58:10.630345   11491 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0318 13:58:10.630413   11491 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/newest-cni-396000/config.json ...
	I0318 13:58:10.630865   11491 start.go:360] acquireMachinesLock for newest-cni-396000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:58:10.630897   11491 start.go:364] duration metric: took 25.75µs to acquireMachinesLock for "newest-cni-396000"
	I0318 13:58:10.630906   11491 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:58:10.630913   11491 fix.go:54] fixHost starting: 
	I0318 13:58:10.631029   11491 fix.go:112] recreateIfNeeded on newest-cni-396000: state=Stopped err=<nil>
	W0318 13:58:10.631037   11491 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:58:10.635070   11491 out.go:177] * Restarting existing qemu2 VM for "newest-cni-396000" ...
	I0318 13:58:10.643273   11491 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:14:0e:4f:ba:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/disk.qcow2
	I0318 13:58:10.645317   11491 main.go:141] libmachine: STDOUT: 
	I0318 13:58:10.645338   11491 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:58:10.645369   11491 fix.go:56] duration metric: took 14.455791ms for fixHost
	I0318 13:58:10.645374   11491 start.go:83] releasing machines lock for "newest-cni-396000", held for 14.472584ms
	W0318 13:58:10.645381   11491 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:58:10.645412   11491 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:58:10.645417   11491 start.go:728] Will try again in 5 seconds ...
	I0318 13:58:15.647597   11491 start.go:360] acquireMachinesLock for newest-cni-396000: {Name:mkd4856b9c1f6370b85aac22adfebe6c39e0ed82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:58:15.647940   11491 start.go:364] duration metric: took 265.583µs to acquireMachinesLock for "newest-cni-396000"
	I0318 13:58:15.648110   11491 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:58:15.648127   11491 fix.go:54] fixHost starting: 
	I0318 13:58:15.648813   11491 fix.go:112] recreateIfNeeded on newest-cni-396000: state=Stopped err=<nil>
	W0318 13:58:15.648840   11491 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:58:15.655655   11491 out.go:177] * Restarting existing qemu2 VM for "newest-cni-396000" ...
	I0318 13:58:15.660329   11491 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:14:0e:4f:ba:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18421-6777/.minikube/machines/newest-cni-396000/disk.qcow2
	I0318 13:58:15.669811   11491 main.go:141] libmachine: STDOUT: 
	I0318 13:58:15.669876   11491 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 13:58:15.669959   11491 fix.go:56] duration metric: took 21.830959ms for fixHost
	I0318 13:58:15.669979   11491 start.go:83] releasing machines lock for "newest-cni-396000", held for 22.013583ms
	W0318 13:58:15.670139   11491 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-396000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-396000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 13:58:15.678032   11491 out.go:177] 
	W0318 13:58:15.682122   11491 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 13:58:15.682181   11491 out.go:239] * 
	* 
	W0318 13:58:15.684384   11491 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:58:15.693152   11491 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-396000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-396000 -n newest-cni-396000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-396000 -n newest-cni-396000: exit status 7 (70.383334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-396000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
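Note on this failure: the start never gets past 'Failed to connect to "/var/run/socket_vmnet": Connection refused', which points at the socket_vmnet daemon on the host rather than at the minikube profile itself. A minimal sketch of how one might confirm the daemon is alive before re-running, assuming the paths printed in the log above (these commands are standard macOS tooling, not part of the test suite):

	# Does the socket that minikube hands to socket_vmnet_client exist?
	ls -l /var/run/socket_vmnet
	# Is any process actually holding it? (lsof -U lists open UNIX domain sockets)
	sudo lsof -U | grep socket_vmnet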

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-396000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-396000 -n newest-cni-396000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-396000 -n newest-cni-396000: exit status 7 (32.242959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-396000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)
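The diff above is all wants and no gots: the VM never started, so the image list comes back empty and every expected v1.29.0-rc.2 image is reported missing. A sketch of running the same comparison by hand, assuming jq is installed and that the JSON output carries a repoTags field (both assumptions; the test itself does this comparison in Go):

	out/minikube-darwin-arm64 -p newest-cni-396000 image list --format=json | jq -r '.[].repoTags[]'
	# On a healthy profile this should print the registry.k8s.io/* and storage-provisioner images listed above.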

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-396000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-396000 --alsologtostderr -v=1: exit status 83 (44.2415ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-396000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-396000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:58:15.886118   11505 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:58:15.886280   11505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:58:15.886283   11505 out.go:304] Setting ErrFile to fd 2...
	I0318 13:58:15.886286   11505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:58:15.886424   11505 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:58:15.886671   11505 out.go:298] Setting JSON to false
	I0318 13:58:15.886679   11505 mustload.go:65] Loading cluster: newest-cni-396000
	I0318 13:58:15.886875   11505 config.go:182] Loaded profile config "newest-cni-396000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0318 13:58:15.890963   11505 out.go:177] * The control-plane node newest-cni-396000 host is not running: state=Stopped
	I0318 13:58:15.895126   11505 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-396000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-396000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-396000 -n newest-cni-396000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-396000 -n newest-cni-396000: exit status 7 (32.627792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-396000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-396000 -n newest-cni-396000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-396000 -n newest-cni-396000: exit status 7 (32.265ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-396000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)

                                                
                                    

Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.28.4/json-events 26.54
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.23
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.22
21 TestDownloadOnly/v1.29.0-rc.2/json-events 51.51
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.23
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.23
30 TestBinaryMirror 0.34
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 9.44
48 TestErrorSpam/start 0.39
49 TestErrorSpam/status 0.1
50 TestErrorSpam/pause 0.12
51 TestErrorSpam/unpause 0.13
52 TestErrorSpam/stop 9.01
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 6.29
64 TestFunctional/serial/CacheCmd/cache/add_local 1.17
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.24
80 TestFunctional/parallel/DryRun 0.27
81 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/AddonsCmd 0.12
102 TestFunctional/parallel/License 1.29
103 TestFunctional/parallel/Version/short 0.04
110 TestFunctional/parallel/ImageCommands/Setup 5.49
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.13
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
135 TestFunctional/parallel/ProfileCmd/profile_list 0.11
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_addon-resizer_images 0.15
145 TestFunctional/delete_my-image_image 0.04
146 TestFunctional/delete_minikube_cached_images 0.04
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 3.42
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.33
202 TestMainNoArgs 0.04
249 TestStoppedBinaryUpgrade/Setup 5.58
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
266 TestNoKubernetes/serial/ProfileList 31.33
267 TestNoKubernetes/serial/Stop 3.13
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
276 TestStoppedBinaryUpgrade/MinikubeLogs 0.82
286 TestStartStop/group/old-k8s-version/serial/Stop 1.93
287 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
291 TestStartStop/group/no-preload/serial/Stop 2.99
298 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.11
308 TestStartStop/group/embed-certs/serial/Stop 3.48
309 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
317 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.51
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
322 TestStartStop/group/newest-cni/serial/DeployApp 0
323 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
326 TestStartStop/group/newest-cni/serial/Stop 1.84
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-993000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-993000: exit status 85 (100.037375ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-993000 | jenkins | v1.32.0 | 18 Mar 24 13:28 PDT |          |
	|         | -p download-only-993000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:28:38
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:28:38.465597    7238 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:28:38.465743    7238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:28:38.465747    7238 out.go:304] Setting ErrFile to fd 2...
	I0318 13:28:38.465749    7238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:28:38.466086    7238 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	W0318 13:28:38.466206    7238 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18421-6777/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18421-6777/.minikube/config/config.json: no such file or directory
	I0318 13:28:38.467775    7238 out.go:298] Setting JSON to true
	I0318 13:28:38.489400    7238 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5290,"bootTime":1710788428,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:28:38.489465    7238 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:28:38.501770    7238 out.go:97] [download-only-993000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:28:38.504702    7238 out.go:169] MINIKUBE_LOCATION=18421
	I0318 13:28:38.501900    7238 notify.go:220] Checking for updates...
	W0318 13:28:38.501922    7238 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball: no such file or directory
	I0318 13:28:38.526813    7238 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:28:38.529730    7238 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:28:38.533781    7238 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:28:38.537822    7238 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	W0318 13:28:38.544791    7238 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 13:28:38.545017    7238 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:28:38.547722    7238 out.go:97] Using the qemu2 driver based on user configuration
	I0318 13:28:38.547746    7238 start.go:297] selected driver: qemu2
	I0318 13:28:38.547753    7238 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:28:38.547845    7238 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:28:38.550719    7238 out.go:169] Automatically selected the socket_vmnet network
	I0318 13:28:38.556263    7238 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0318 13:28:38.556389    7238 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 13:28:38.556472    7238 cni.go:84] Creating CNI manager for ""
	I0318 13:28:38.556494    7238 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0318 13:28:38.556548    7238 start.go:340] cluster config:
	{Name:download-only-993000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-993000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:28:38.561976    7238 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:28:38.565824    7238 out.go:97] Downloading VM boot image ...
	I0318 13:28:38.565863    7238 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso
	I0318 13:28:57.319991    7238 out.go:97] Starting "download-only-993000" primary control-plane node in "download-only-993000" cluster
	I0318 13:28:57.320030    7238 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 13:28:57.629276    7238 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 13:28:57.629323    7238 cache.go:56] Caching tarball of preloaded images
	I0318 13:28:57.630060    7238 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 13:28:57.635712    7238 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0318 13:28:57.635741    7238 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 13:28:58.252777    7238 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 13:29:18.690773    7238 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 13:29:18.690955    7238 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 13:29:19.388419    7238 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0318 13:29:19.388611    7238 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/download-only-993000/config.json ...
	I0318 13:29:19.388641    7238 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/download-only-993000/config.json: {Name:mk168a4f98d5d1e21683dd015f563fc2f060fdc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:29:19.389932    7238 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 13:29:19.390118    7238 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0318 13:29:19.726579    7238 out.go:169] 
	W0318 13:29:19.731766    7238 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18421-6777/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1085a3520 0x1085a3520 0x1085a3520 0x1085a3520 0x1085a3520 0x1085a3520 0x1085a3520] Decompressors:map[bz2:0x140006dbbc0 gz:0x140006dbbc8 tar:0x140006dbb70 tar.bz2:0x140006dbb80 tar.gz:0x140006dbb90 tar.xz:0x140006dbba0 tar.zst:0x140006dbbb0 tbz2:0x140006dbb80 tgz:0x140006dbb90 txz:0x140006dbba0 tzst:0x140006dbbb0 xz:0x140006dbbd0 zip:0x140006dbbe0 zst:0x140006dbbd8] Getters:map[file:0x14002030d80 http:0x1400057e190 https:0x1400057e1e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0318 13:29:19.731789    7238 out_reason.go:110] 
	W0318 13:29:19.739609    7238 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:29:19.743637    7238 out.go:169] 
	
	
	* The control-plane node download-only-993000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-993000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
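The actual failure buried in the Last Start log above is the 404 on the kubectl checksum URL: upstream apparently does not publish darwin/arm64 kubectl binaries as far back as v1.20.0, so the cache step cannot succeed. A quick way to confirm the 404 independently of minikube, assuming curl is available:

	curl -s -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	# prints 404; the v1.28.4 equivalent, downloaded successfully later in this report, does not.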

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-993000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (26.54s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-051000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-051000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 : (26.542730583s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (26.54s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-051000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-051000: exit status 85 (83.058166ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-993000 | jenkins | v1.32.0 | 18 Mar 24 13:28 PDT |                     |
	|         | -p download-only-993000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 18 Mar 24 13:29 PDT | 18 Mar 24 13:29 PDT |
	| delete  | -p download-only-993000        | download-only-993000 | jenkins | v1.32.0 | 18 Mar 24 13:29 PDT | 18 Mar 24 13:29 PDT |
	| start   | -o=json --download-only        | download-only-051000 | jenkins | v1.32.0 | 18 Mar 24 13:29 PDT |                     |
	|         | -p download-only-051000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:29:20
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:29:20.421353    7298 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:29:20.421487    7298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:29:20.421491    7298 out.go:304] Setting ErrFile to fd 2...
	I0318 13:29:20.421493    7298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:29:20.421622    7298 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:29:20.422727    7298 out.go:298] Setting JSON to true
	I0318 13:29:20.438935    7298 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5332,"bootTime":1710788428,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:29:20.438994    7298 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:29:20.443109    7298 out.go:97] [download-only-051000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:29:20.447092    7298 out.go:169] MINIKUBE_LOCATION=18421
	I0318 13:29:20.443214    7298 notify.go:220] Checking for updates...
	I0318 13:29:20.455052    7298 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:29:20.458057    7298 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:29:20.461135    7298 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:29:20.464035    7298 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	W0318 13:29:20.470156    7298 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 13:29:20.470370    7298 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:29:20.474055    7298 out.go:97] Using the qemu2 driver based on user configuration
	I0318 13:29:20.474064    7298 start.go:297] selected driver: qemu2
	I0318 13:29:20.474068    7298 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:29:20.474119    7298 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:29:20.477069    7298 out.go:169] Automatically selected the socket_vmnet network
	I0318 13:29:20.482167    7298 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0318 13:29:20.482258    7298 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 13:29:20.482292    7298 cni.go:84] Creating CNI manager for ""
	I0318 13:29:20.482300    7298 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:29:20.482312    7298 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 13:29:20.482347    7298 start.go:340] cluster config:
	{Name:download-only-051000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:29:20.486719    7298 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:29:20.488017    7298 out.go:97] Starting "download-only-051000" primary control-plane node in "download-only-051000" cluster
	I0318 13:29:20.488025    7298 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:29:21.152127    7298 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:29:21.152195    7298 cache.go:56] Caching tarball of preloaded images
	I0318 13:29:21.152896    7298 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:29:21.157686    7298 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0318 13:29:21.157717    7298 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0318 13:29:21.770838    7298 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4?checksum=md5:6fb922d1d9dc01a9d3c0b965ed219613 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 13:29:38.425647    7298 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0318 13:29:38.425823    7298 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0318 13:29:39.007799    7298 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:29:39.008001    7298 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/download-only-051000/config.json ...
	I0318 13:29:39.008018    7298 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/download-only-051000/config.json: {Name:mk52bff5367658d05c092decf55b50b7c9ae4179 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:29:39.008251    7298 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:29:39.008363    7298 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/darwin/arm64/v1.28.4/kubectl
	
	
	* The control-plane node download-only-051000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-051000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-051000
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.22s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (51.51s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-387000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-387000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 : (51.509426291s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (51.51s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-387000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-387000: exit status 85 (77.612333ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-993000 | jenkins | v1.32.0 | 18 Mar 24 13:28 PDT |                     |
	|         | -p download-only-993000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 18 Mar 24 13:29 PDT | 18 Mar 24 13:29 PDT |
	| delete  | -p download-only-993000           | download-only-993000 | jenkins | v1.32.0 | 18 Mar 24 13:29 PDT | 18 Mar 24 13:29 PDT |
	| start   | -o=json --download-only           | download-only-051000 | jenkins | v1.32.0 | 18 Mar 24 13:29 PDT |                     |
	|         | -p download-only-051000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 18 Mar 24 13:29 PDT | 18 Mar 24 13:29 PDT |
	| delete  | -p download-only-051000           | download-only-051000 | jenkins | v1.32.0 | 18 Mar 24 13:29 PDT | 18 Mar 24 13:29 PDT |
	| start   | -o=json --download-only           | download-only-387000 | jenkins | v1.32.0 | 18 Mar 24 13:29 PDT |                     |
	|         | -p download-only-387000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:29:47
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:29:47.504187    7353 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:29:47.504330    7353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:29:47.504333    7353 out.go:304] Setting ErrFile to fd 2...
	I0318 13:29:47.504335    7353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:29:47.504471    7353 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:29:47.505570    7353 out.go:298] Setting JSON to true
	I0318 13:29:47.521726    7353 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5359,"bootTime":1710788428,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:29:47.521788    7353 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:29:47.526330    7353 out.go:97] [download-only-387000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:29:47.530237    7353 out.go:169] MINIKUBE_LOCATION=18421
	I0318 13:29:47.526437    7353 notify.go:220] Checking for updates...
	I0318 13:29:47.538335    7353 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:29:47.541328    7353 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:29:47.544376    7353 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:29:47.547294    7353 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	W0318 13:29:47.553277    7353 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 13:29:47.553490    7353 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:29:47.556309    7353 out.go:97] Using the qemu2 driver based on user configuration
	I0318 13:29:47.556317    7353 start.go:297] selected driver: qemu2
	I0318 13:29:47.556321    7353 start.go:901] validating driver "qemu2" against <nil>
	I0318 13:29:47.556372    7353 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:29:47.559234    7353 out.go:169] Automatically selected the socket_vmnet network
	I0318 13:29:47.564367    7353 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0318 13:29:47.564475    7353 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 13:29:47.564508    7353 cni.go:84] Creating CNI manager for ""
	I0318 13:29:47.564519    7353 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:29:47.564524    7353 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 13:29:47.564559    7353 start.go:340] cluster config:
	{Name:download-only-387000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-387000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:29:47.568903    7353 iso.go:125] acquiring lock: {Name:mk5b9b30a5de5f8265e1b5ca2b1cba833a75f2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:29:47.572358    7353 out.go:97] Starting "download-only-387000" primary control-plane node in "download-only-387000" cluster
	I0318 13:29:47.572369    7353 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 13:29:48.231423    7353 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0318 13:29:48.231538    7353 cache.go:56] Caching tarball of preloaded images
	I0318 13:29:48.232367    7353 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 13:29:48.237881    7353 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0318 13:29:48.237930    7353 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0318 13:29:48.866047    7353 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4?checksum=md5:ec278d0a65e5e64ee0e67f51e14b1867 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0318 13:30:04.593827    7353 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0318 13:30:04.593998    7353 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0318 13:30:05.151285    7353 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0318 13:30:05.151501    7353 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/download-only-387000/config.json ...
	I0318 13:30:05.151517    7353 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18421-6777/.minikube/profiles/download-only-387000/config.json: {Name:mk13914a693d9e183ce76a63cf9dad85ec42067e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:30:05.151746    7353 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 13:30:05.151862    7353 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18421-6777/.minikube/cache/darwin/arm64/v1.29.0-rc.2/kubectl
	
	
	* The control-plane node download-only-387000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-387000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)
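Aside: the LogsDuration subtests in this run pass precisely because "minikube logs" exits with status 85 on a download-only profile; no host was ever created, so there is nothing to collect logs from. For readers reproducing this outside the harness, a minimal sketch of that exit-code assertion in Go follows; the binary path and profile name are taken from the log above, everything else is illustrative and not the repo's actual test helper.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as aaa_download_only_test.go:184 above.
	out, err := exec.Command("out/minikube-darwin-arm64", "logs", "-p", "download-only-387000").CombinedOutput()
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 85 {
		// Expected: exit status 85 means the profile has no running host.
		fmt.Printf("got expected exit status 85; output:\n%s", out)
		return
	}
	fmt.Printf("unexpected result: err=%v\n", err)
}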

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.23s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-387000
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.34s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-417000 --alsologtostderr --binary-mirror http://127.0.0.1:50931 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-417000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-417000
--- PASS: TestBinaryMirror (0.34s)
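TestBinaryMirror starts a local HTTP endpoint (http://127.0.0.1:50931 in this run) and passes it via --binary-mirror so Kubernetes binaries are fetched from it rather than from the default release host. A stand-in for such a mirror can be as small as a static file server; this sketch assumes a hypothetical ./mirror directory laid out like the upstream release tree and is not the test's own server.

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve ./mirror (hypothetical) on the port seen in the log above.
	fs := http.FileServer(http.Dir("./mirror"))
	log.Fatal(http.ListenAndServe("127.0.0.1:50931", fs))
}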

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-980000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-980000: exit status 85 (63.97325ms)

-- stdout --
	* Profile "addons-980000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-980000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-980000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-980000: exit status 85 (60.172959ms)

-- stdout --
	* Profile "addons-980000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-980000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (9.44s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.44s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 status: exit status 7 (34.385667ms)

-- stdout --
	nospam-652000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 status: exit status 7 (31.943917ms)

-- stdout --
	nospam-652000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 status: exit status 7 (31.855333ms)

-- stdout --
	nospam-652000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)
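Each status call above prints one "key: value" line per component and exits with status 7 because the host is stopped, which is exactly what the test wants to see three times in a row. If that output had to be consumed programmatically, a generic line parse is enough; the sketch below is illustrative and is not minikube's own status code.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// Output copied from the run above.
	out := `nospam-652000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped`
	status := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), ": "); ok {
			status[k] = v
		}
	}
	fmt.Println(status["host"]) // "Stopped" for this report
}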

TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 pause: exit status 83 (41.811583ms)

-- stdout --
	* The control-plane node nospam-652000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-652000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 pause: exit status 83 (38.482792ms)

-- stdout --
	* The control-plane node nospam-652000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-652000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 pause: exit status 83 (41.16975ms)

-- stdout --
	* The control-plane node nospam-652000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-652000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.13s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 unpause: exit status 83 (45.914958ms)

-- stdout --
	* The control-plane node nospam-652000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-652000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 unpause: exit status 83 (40.797583ms)

-- stdout --
	* The control-plane node nospam-652000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-652000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 unpause: exit status 83 (41.764333ms)

-- stdout --
	* The control-plane node nospam-652000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-652000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.13s)

TestErrorSpam/stop (9.01s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 stop: (3.099187667s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 stop: (2.075474083s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-652000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-652000 stop: (3.829242542s)
--- PASS: TestErrorSpam/stop (9.01s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18421-6777/.minikube/files/etc/test/nested/copy/7236/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-229000 cache add registry.k8s.io/pause:3.1: (2.333041209s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-229000 cache add registry.k8s.io/pause:3.3: (2.169811792s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-229000 cache add registry.k8s.io/pause:latest: (1.790146459s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.29s)
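The three additions above are plain CLI invocations over a list of image tags, each taking roughly two seconds, dominated by the image pull. Driving the same loop from Go might look like this; the binary path and profile are copied from the log, and the loop itself is illustrative, not the test's code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, tag := range []string{"3.1", "3.3", "latest"} {
		img := "registry.k8s.io/pause:" + tag
		// Same command as functional_test.go:1045 above.
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-229000", "cache", "add", img)
		if err := cmd.Run(); err != nil {
			fmt.Println("cache add failed for", img, ":", err)
		}
	}
}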

TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-229000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1392532263/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 cache add minikube-local-cache-test:functional-229000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 cache delete minikube-local-cache-test:functional-229000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-229000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.24s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 config get cpus: exit status 14 (32.294083ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 config get cpus: exit status 14 (38.076917ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)
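The sequence above hinges on "config get" exiting 14 ("specified key could not be found in config") once the key has been unset, while "set" followed by "get" succeeds. A sketch of the same round-trip driven from Go; the run helper is hypothetical, and the binary path and profile come from the log.

package main

import (
	"fmt"
	"os/exec"
)

// run executes the minikube binary and returns its exit code
// (-1 if the binary could not be started at all).
func run(args ...string) int {
	cmd := exec.Command("out/minikube-darwin-arm64", args...)
	if err := cmd.Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			return ee.ExitCode()
		}
		return -1
	}
	return 0
}

func main() {
	run("-p", "functional-229000", "config", "set", "cpus", "2")
	fmt.Println(run("-p", "functional-229000", "config", "get", "cpus")) // 0: key exists
	run("-p", "functional-229000", "config", "unset", "cpus")
	fmt.Println(run("-p", "functional-229000", "config", "get", "cpus")) // 14: key not found
}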

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-229000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-229000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (156.717625ms)

-- stdout --
	* [functional-229000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0318 13:32:33.874988    8104 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:32:33.875125    8104 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:32:33.875130    8104 out.go:304] Setting ErrFile to fd 2...
	I0318 13:32:33.875133    8104 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:32:33.875302    8104 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:32:33.876504    8104 out.go:298] Setting JSON to false
	I0318 13:32:33.895278    8104 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5525,"bootTime":1710788428,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:32:33.895342    8104 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:32:33.900547    8104 out.go:177] * [functional-229000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 13:32:33.907481    8104 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:32:33.907533    8104 notify.go:220] Checking for updates...
	I0318 13:32:33.911505    8104 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:32:33.914464    8104 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:32:33.917501    8104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:32:33.920446    8104 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:32:33.923323    8104 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:32:33.926817    8104 config.go:182] Loaded profile config "functional-229000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:32:33.927116    8104 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:32:33.931390    8104 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 13:32:33.938451    8104 start.go:297] selected driver: qemu2
	I0318 13:32:33.938459    8104 start.go:901] validating driver "qemu2" against &{Name:functional-229000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-229000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:32:33.938535    8104 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:32:33.944382    8104 out.go:177] 
	W0318 13:32:33.948439    8104 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0318 13:32:33.952396    8104 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-229000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)
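Both dry-run failures above are the same guard: a requested allocation below the usable minimum is rejected with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before any VM work begins, while the second, flag-free dry run succeeds. The check amounts to a comparison like the sketch below; the 1800MB floor is quoted from the error text, and the function is illustrative rather than minikube's implementation.

package main

import "fmt"

// Floor taken from the RSRC_INSUFFICIENT_REQ_MEMORY message above.
const minUsableMemoryMB = 1800

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // fails, as in the dry run above
	fmt.Println(validateMemory(4000)) // nil: the profile's actual allocation
}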

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-229000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-229000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.75225ms)

-- stdout --
	* [functional-229000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0318 13:32:34.103790    8115 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:32:34.103895    8115 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:32:34.103898    8115 out.go:304] Setting ErrFile to fd 2...
	I0318 13:32:34.103901    8115 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:32:34.104027    8115 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18421-6777/.minikube/bin
	I0318 13:32:34.105407    8115 out.go:298] Setting JSON to false
	I0318 13:32:34.122114    8115 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5526,"bootTime":1710788428,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0318 13:32:34.122185    8115 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:32:34.127503    8115 out.go:177] * [functional-229000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	I0318 13:32:34.134392    8115 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 13:32:34.138485    8115 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	I0318 13:32:34.134496    8115 notify.go:220] Checking for updates...
	I0318 13:32:34.141467    8115 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 13:32:34.144388    8115 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:32:34.147473    8115 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	I0318 13:32:34.150477    8115 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:32:34.153819    8115 config.go:182] Loaded profile config "functional-229000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:32:34.154103    8115 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:32:34.158465    8115 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0318 13:32:34.165445    8115 start.go:297] selected driver: qemu2
	I0318 13:32:34.165452    8115 start.go:901] validating driver "qemu2" against &{Name:functional-229000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-229000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:32:34.165516    8115 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:32:34.172518    8115 out.go:177] 
	W0318 13:32:34.176260    8115 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0318 13:32:34.180448    8115 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/License (1.29s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-arm64 license: (1.287677334s)
--- PASS: TestFunctional/parallel/License (1.29s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (5.49s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.453387375s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-229000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.49s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-229000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 image rm gcr.io/google-containers/addon-resizer:functional-229000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-229000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 image save --daemon gcr.io/google-containers/addon-resizer:functional-229000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-229000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "69.805583ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "35.713542ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "75.412459ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "35.662417ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)
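"profile list -o json" emits a JSON document, which is what the timed invocations above exercise. Without assuming the schema, a generic decode is enough to confirm the output is well-formed; this hypothetical consumer assumes only that the top level is a JSON object.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "-o", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var doc map[string]any // assumes a JSON object at the top level
	if err := json.Unmarshal(out, &doc); err != nil {
		fmt.Println("not valid JSON:", err)
		return
	}
	fmt.Println("top-level keys:", len(doc))
}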

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.012374042s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
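The check above shells out to macOS's dscacheutil, which resolves the tunnel-published service name through the system resolver rather than querying a DNS server directly the way dig does. A minimal reproduction of that probe follows; the hostname comes from the log, and the "ip_address" success heuristic is an assumption about dscacheutil's output format.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	host := "nginx-svc.default.svc.cluster.local."
	// Same query as functional_test_tunnel_test.go:351 above.
	out, err := exec.Command("dscacheutil", "-q", "host", "-a", "name", host).Output()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	// dscacheutil prints "ip_address: <addr>" lines when resolution succeeds.
	fmt.Println("resolved:", strings.Contains(string(out), "ip_address"))
}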

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-229000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_addon-resizer_images (0.15s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-229000
--- PASS: TestFunctional/delete_addon-resizer_images (0.15s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-229000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-229000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.42s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-291000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-291000 --output=json --user=testUser: (3.421424584s)
--- PASS: TestJSONOutput/stop/Command (3.42s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-899000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-899000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (101.078709ms)
-- stdout --
	{"specversion":"1.0","id":"ceaefc8b-8506-464e-975a-0dcf552f3940","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-899000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ec8bc02f-0d1b-48e7-8861-fe4f50dda26b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18421"}}
	{"specversion":"1.0","id":"0853d1e0-0441-4b3e-8bd2-e487907dc3a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig"}}
	{"specversion":"1.0","id":"8040b9f2-d00c-49cf-9727-ab289cdfd41f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"15bc1128-2024-4a40-a7dc-5b0bc7babf96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cf4e2b05-41b9-44b9-aa06-cacc10d65ab9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube"}}
	{"specversion":"1.0","id":"18a69e47-5b00-419b-ab62-08d83d44dcca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c6436892-85b0-4667-a817-06fdd28c34cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-899000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-899000
--- PASS: TestErrorJSONOutput (0.33s)
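Each stdout line above is a CloudEvents-style JSON object, and the test asserts that the final io.k8s.sigs.minikube.error event reports DRV_UNSUPPORTED_OS with exitcode 56. A hedged sketch of consuming such a stream, with the struct fields inferred from the log lines above rather than from any published schema:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // event mirrors the fields visible in the stdout above; every data
    // value shown there is a string, so map[string]string suffices here.
    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var e event
            if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
                continue // not a JSON event line
            }
            if e.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("error %s (exit %s): %s\n",
                    e.Data["name"], e.Data["exitcode"], e.Data["message"])
            }
        }
    }

Fed the stdout block above, this would print the DRV_UNSUPPORTED_OS line; the Audit and CurrentSteps subtests earlier presumably filter the same stream by event type in a similar way.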
TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (5.58s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (5.58s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-170000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-170000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (102.933792ms)
-- stdout --
	* [NoKubernetes-170000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18421
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18421-6777/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18421-6777/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
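This entry is a negative test: combining --no-kubernetes with --kubernetes-version must fail with exit status 14 (MK_USAGE), which is exactly what the stderr above shows. A sketch of the same assertion, with the command line taken verbatim from the log:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Flags copied from the log; minikube must refuse this combination.
        cmd := exec.Command("out/minikube-darwin-arm64", "start",
            "-p", "NoKubernetes-170000", "--no-kubernetes",
            "--kubernetes-version=1.20", "--driver=qemu2")
        err := cmd.Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 14 {
            fmt.Println("got the expected MK_USAGE rejection (exit status 14)")
            return
        }
        fmt.Println("unexpected result:", err)
    }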
TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-170000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-170000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.757125ms)
-- stdout --
	* The control-plane node NoKubernetes-170000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-170000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.33s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.636299584s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.692593292s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.33s)

TestNoKubernetes/serial/Stop (3.13s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-170000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-170000: (3.126289791s)
--- PASS: TestNoKubernetes/serial/Stop (3.13s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-170000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-170000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.704875ms)
-- stdout --
	* The control-plane node NoKubernetes-170000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-170000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-813000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

TestStartStop/group/old-k8s-version/serial/Stop (1.93s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-255000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-255000 --alsologtostderr -v=3: (1.927394833s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.93s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000: exit status 7 (57.421958ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-255000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
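The "status error: exit status 7 (may be ok)" note above is deliberate: minikube status exits non-zero for a stopped profile, so the test tolerates the error and goes on to enable the addon anyway. A minimal sketch of that tolerance, reusing the command line and profile name from the log:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", "old-k8s-version-255000",
            "-n", "old-k8s-version-255000")
        out, err := cmd.Output() // stdout ("Stopped") is captured even on failure
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // A non-zero exit (7 above) just means the host is not running;
            // the test logs it as "may be ok" and continues.
            fmt.Printf("host %q, exit code %d (may be ok)\n", out, ee.ExitCode())
            return
        }
        if err != nil {
            fmt.Println("could not run status:", err)
            return
        }
        fmt.Printf("host %q\n", out)
    }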
TestStartStop/group/no-preload/serial/Stop (2.99s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-205000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-205000 --alsologtostderr -v=3: (2.994102584s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.99s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-205000 -n no-preload-205000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-205000 -n no-preload-205000: exit status 7 (34.889ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-205000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/embed-certs/serial/Stop (3.48s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-142000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-142000 --alsologtostderr -v=3: (3.482925417s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.48s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000: exit status 7 (59.432167ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-142000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.51s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-349000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-349000 --alsologtostderr -v=3: (3.509459958s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.51s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000: exit status 7 (63.415417ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-349000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-396000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (1.84s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-396000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-396000 --alsologtostderr -v=3: (1.842569416s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.84s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-396000 -n newest-cni-396000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-396000 -n newest-cni-396000: exit status 7 (60.955542ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-396000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (24/266)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (12s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-229000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1077240695/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710793917085426000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1077240695/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710793917085426000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1077240695/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710793917085426000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1077240695/001/test-1710793917085426000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (57.926375ms)
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.535209ms)
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.309ms)
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.167167ms)
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.855333ms)
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.041709ms)
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.222292ms)
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"
-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "sudo umount -f /mount-9p": exit status 83 (47.488334ms)
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"
-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-229000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-229000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1077240695/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (12.00s)
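The skip above is environmental rather than a product failure: macOS prompts before letting a non-code-signed binary listen on a non-localhost port, so the 9p mount never appears and the test gives up after seven findmnt probes. A sketch of that polling loop; the command line comes from the log, while the one-second retry interval is an assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Probe for the 9p mount the way the test does, giving up after
        // seven failed attempts (matching the repeated findmnt runs above).
        for attempt := 1; attempt <= 7; attempt++ {
            probe := exec.Command("out/minikube-darwin-arm64", "-p", "functional-229000",
                "ssh", "findmnt -T /mount-9p | grep 9p")
            if probe.Run() == nil {
                fmt.Println("mount appeared on attempt", attempt)
                return
            }
            time.Sleep(time.Second) // retry interval is an assumption
        }
        fmt.Println("mount did not appear; the test skips at this point")
    }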
TestFunctional/parallel/MountCmd/specific-port (12.57s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-229000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3756208786/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (64.643125ms)
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.828875ms)
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.143375ms)
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.068459ms)
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.979958ms)
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.622916ms)
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.390125ms)
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"
-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "sudo umount -f /mount-9p": exit status 83 (46.7345ms)
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"
-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-229000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-229000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3756208786/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (12.57s)

TestFunctional/parallel/MountCmd/VerifyCleanup (12.15s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-229000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2469820959/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-229000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2469820959/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-229000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2469820959/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T" /mount1: exit status 83 (85.600083ms)
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T" /mount1: exit status 83 (86.921917ms)
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T" /mount1: exit status 83 (89.306917ms)
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T" /mount1: exit status 83 (88.291917ms)
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T" /mount1: exit status 83 (85.733709ms)
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T" /mount1: exit status 83 (88.015209ms)
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-229000 ssh "findmnt -T" /mount1: exit status 83 (86.555375ms)
-- stdout --
	* The control-plane node functional-229000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-229000"
-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-229000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2469820959/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-229000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2469820959/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-229000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2469820959/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (12.15s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (2.48s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-099000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-099000
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-099000
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-099000
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-099000
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-099000
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-099000
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-099000
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-099000
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-099000
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-099000
>>> host: /etc/nsswitch.conf:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: /etc/hosts:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: /etc/resolv.conf:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-099000
>>> host: crictl pods:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: crictl containers:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> k8s: describe netcat deployment:
error: context "cilium-099000" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-099000" does not exist
>>> k8s: netcat logs:
error: context "cilium-099000" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-099000" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-099000" does not exist
>>> k8s: coredns logs:
error: context "cilium-099000" does not exist
>>> k8s: describe api server pod(s):
error: context "cilium-099000" does not exist
>>> k8s: api server logs:
error: context "cilium-099000" does not exist
>>> host: /etc/cni:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: ip a s:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: ip r s:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: iptables-save:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: iptables table nat:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-099000
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-099000
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-099000" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-099000" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-099000
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-099000
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-099000" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-099000" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-099000" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-099000" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-099000" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: kubelet daemon config:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> k8s: kubelet logs:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
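
This empty kubeconfig (clusters, contexts and users all null) is the root cause of every failing probe in this dump: the cilium-099000 profile was never started, so no context of that name exists. A minimal sketch of how the same error surfaces, assuming k8s.io/client-go (the library behind kubectl's context handling); this is illustrative, not the harness's own code:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load kubeconfig from the default locations and force a named context.
    	rules := clientcmd.NewDefaultClientConfigLoadingRules()
    	overrides := &clientcmd.ConfigOverrides{CurrentContext: "cilium-099000"}
    	_, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
    	// With an empty kubeconfig this prints an error like:
    	//   context was not found for specified context: cilium-099000
    	fmt.Println(err)
    }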
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-099000
>>> host: docker daemon status:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: docker daemon config:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: docker system info:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: cri-docker daemon status:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: cri-docker daemon config:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: cri-dockerd version:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: containerd daemon status:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: containerd daemon config:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: containerd config dump:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: crio daemon status:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: crio daemon config:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: /etc/crio:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
>>> host: crio config:
* Profile "cilium-099000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-099000"
----------------------- debugLogs end: cilium-099000 [took: 2.239947166s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-099000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-099000
--- SKIP: TestNetworkPlugins/group/cilium (2.48s)
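
Even for a skipped subtest the harness still runs its deferred debug dump (the panic.go:626 frame above) and then deletes the scratch profile (helpers_test.go:175-178). A minimal sketch of that cleanup step; the function name is illustrative, not minikube's actual helper:

    package test

    import (
    	"os/exec"
    	"testing"
    )

    // cleanupProfile mirrors the "delete -p <profile>" step in the log above.
    func cleanupProfile(t *testing.T, profile string) {
    	t.Helper()
    	t.Logf("Cleaning up %q profile ...", profile)
    	out, err := exec.Command("out/minikube-darwin-arm64", "delete", "-p", profile).CombinedOutput()
    	if err != nil {
    		t.Logf("failed to delete profile %q: %v\n%s", profile, err, out)
    	}
    }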
TestStartStop/group/disable-driver-mounts (0.23s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-941000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-941000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
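
The === PAUSE / === CONT pair in this block is standard go test -v output for a parallel subtest: t.Parallel() pauses the subtest until the serial phase of its group finishes, and === CONT marks where it resumed before hitting the skip. A minimal sketch of that shape (the skip condition here is illustrative):

    package test

    import "testing"

    func TestStartStop(t *testing.T) {
    	t.Run("group/disable-driver-mounts", func(t *testing.T) {
    		t.Parallel() // go test -v logs "=== PAUSE" here and "=== CONT" on resume
    		t.Skip("only runs on virtualbox")
    	})
    }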