Test Report: QEMU_macOS 18517

225d0002a402609a65399cabc142d90eb2090f83:2024-03-27:33764

Failed tests (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 42.61
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.01
36 TestAddons/Setup 10.34
37 TestCertOptions 10.14
38 TestCertExpiration 195.37
39 TestDockerFlags 10.27
40 TestForceSystemdFlag 10.16
41 TestForceSystemdEnv 10.23
47 TestErrorSpam/setup 10.09
56 TestFunctional/serial/StartWithProxy 9.94
58 TestFunctional/serial/SoftStart 5.27
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
70 TestFunctional/serial/MinikubeKubectlCmd 0.69
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.93
72 TestFunctional/serial/ExtraConfig 5.27
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.07
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.21
82 TestFunctional/parallel/StatusCmd 0.13
86 TestFunctional/parallel/ServiceCmdConnect 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.12
91 TestFunctional/parallel/CpCmd 0.28
93 TestFunctional/parallel/FileSync 0.08
94 TestFunctional/parallel/CertSync 0.3
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
104 TestFunctional/parallel/Version/components 0.04
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
109 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
111 TestFunctional/parallel/DockerEnv/bash 0.05
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
115 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
116 TestFunctional/parallel/ServiceCmd/List 0.05
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
119 TestFunctional/parallel/ServiceCmd/Format 0.05
120 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.07
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 106.07
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.32
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.34
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.74
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 39.28
150 TestMultiControlPlane/serial/StartCluster 10.05
151 TestMultiControlPlane/serial/DeployApp 78.54
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.08
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.1
156 TestMultiControlPlane/serial/CopyFile 0.06
157 TestMultiControlPlane/serial/StopSecondaryNode 0.12
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.11
159 TestMultiControlPlane/serial/RestartSecondaryNode 54.31
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.1
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.79
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.1
164 TestMultiControlPlane/serial/StopCluster 2.19
165 TestMultiControlPlane/serial/RestartCluster 5.27
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.11
167 TestMultiControlPlane/serial/AddSecondaryNode 0.08
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.11
171 TestImageBuild/serial/Setup 9.96
174 TestJSONOutput/start/Command 9.88
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.05
203 TestMinikubeProfile 10.3
206 TestMountStart/serial/StartWithMountFirst 10.65
209 TestMultiNode/serial/FreshStart2Nodes 10.03
210 TestMultiNode/serial/DeployApp2Nodes 118.81
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.08
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.11
215 TestMultiNode/serial/CopyFile 0.07
216 TestMultiNode/serial/StopNode 0.15
217 TestMultiNode/serial/StartAfterStop 57.91
218 TestMultiNode/serial/RestartKeepsNodes 9.36
219 TestMultiNode/serial/DeleteNode 0.11
220 TestMultiNode/serial/StopMultiNode 3.71
221 TestMultiNode/serial/RestartMultiNode 5.26
222 TestMultiNode/serial/ValidateNameConflict 20.11
226 TestPreload 10.03
228 TestScheduledStopUnix 10
229 TestSkaffold 16.87
232 TestRunningBinaryUpgrade 620.21
234 TestKubernetesUpgrade 18.13
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.2
248 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.29
250 TestStoppedBinaryUpgrade/Upgrade 580.28
252 TestPause/serial/Start 10.1
262 TestNoKubernetes/serial/StartWithK8s 9.87
263 TestNoKubernetes/serial/StartWithStopK8s 5.88
264 TestNoKubernetes/serial/Start 5.9
268 TestNoKubernetes/serial/StartNoArgs 5.88
270 TestNetworkPlugins/group/auto/Start 9.84
271 TestNetworkPlugins/group/flannel/Start 9.86
272 TestNetworkPlugins/group/kindnet/Start 10.02
273 TestNetworkPlugins/group/enable-default-cni/Start 9.78
274 TestNetworkPlugins/group/bridge/Start 9.9
275 TestNetworkPlugins/group/kubenet/Start 9.85
276 TestNetworkPlugins/group/custom-flannel/Start 9.86
277 TestNetworkPlugins/group/calico/Start 9.81
279 TestNetworkPlugins/group/false/Start 9.84
281 TestStartStop/group/old-k8s-version/serial/FirstStart 10.29
282 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
283 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
286 TestStartStop/group/old-k8s-version/serial/SecondStart 5.28
288 TestStartStop/group/no-preload/serial/FirstStart 9.96
289 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
290 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
291 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
292 TestStartStop/group/old-k8s-version/serial/Pause 0.11
294 TestStartStop/group/embed-certs/serial/FirstStart 10.07
295 TestStartStop/group/no-preload/serial/DeployApp 0.09
296 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
299 TestStartStop/group/no-preload/serial/SecondStart 7.44
300 TestStartStop/group/embed-certs/serial/DeployApp 0.09
301 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
304 TestStartStop/group/embed-certs/serial/SecondStart 6.12
305 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
306 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
307 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
308 TestStartStop/group/no-preload/serial/Pause 0.11
310 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.04
311 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
312 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
313 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
314 TestStartStop/group/embed-certs/serial/Pause 0.11
316 TestStartStop/group/newest-cni/serial/FirstStart 9.86
317 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
324 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
326 TestStartStop/group/newest-cni/serial/SecondStart 5.26
327 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
328 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
329 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
330 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
334 TestStartStop/group/newest-cni/serial/Pause 0.11
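
To rerun any single failure from this table locally, minikube's integration harness can be driven through the repository's make target. A sketch, assuming the TEST_ARGS pattern from minikube's contributor docs; verify the flag spellings against your checkout:

	# Rerun one failing integration test against the qemu2 driver.
	# TEST_ARGS and the `integration` target follow minikube's hack/ docs;
	# treat both as assumptions for this tree.
	env TEST_ARGS="-minikube-start-args=--driver=qemu2 -test.run TestOffline" make integration
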
TestDownloadOnly/v1.20.0/json-events (42.61s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-853000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-853000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (42.609967917s)

-- stdout --
	{"specversion":"1.0","id":"c6ab0f4a-a6e1-44a2-863d-52aec4fbe438","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-853000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c0eaca39-6f2a-474f-accb-89b14448be23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18517"}}
	{"specversion":"1.0","id":"6adedaca-1230-44b9-acea-5cf13113e09e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig"}}
	{"specversion":"1.0","id":"1fbb366b-e67b-4727-94d0-6fa82754fc14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"a770de6f-4926-4b90-ad3f-d6c653ffe175","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1b834008-25a1-4c61-966c-6b7f798479c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube"}}
	{"specversion":"1.0","id":"facc5a0a-ab15-49e7-b153-24dc0d9e4648","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"2490fc91-f353-4dc6-b092-bde48699f072","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fb0121d6-5e01-4986-8a72-8c55eabd1a2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"289aa9cc-195f-44a6-a876-92521026749b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ccd86965-0e57-40da-bc82-9a315f6411ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-853000\" primary control-plane node in \"download-only-853000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5bb75fec-590e-49af-b1c7-92cd9b2d1fd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3602384a-baf3-43be-b68c-421b53fc0b5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1083df220 0x1083df220 0x1083df220 0x1083df220 0x1083df220 0x1083df220 0x1083df220] Decompressors:map[bz2:0x1400063f830 gz:0x1400063f838 tar:0x1400063f7e0 tar.bz2:0x1400063f7f0 tar.gz:0x1400063f800 tar.xz:0x1400063f810 tar.zst:0x1400063f820 tbz2:0x1400063f7f0 tgz:0x1400063f800 txz:0x1400063f810 tzst:0x1400063f820 xz:0x1400063f840 zip:0x1400063f850 zst:0x1400063f848] Getters:map[file:0x14000526960 http:0x140000b4a50 https:0x140000b4aa0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"8c7506e5-5289-4685-a1be-20a5bb2dd4b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0327 10:51:24.276129    6478 out.go:291] Setting OutFile to fd 1 ...
	I0327 10:51:24.276285    6478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:51:24.276288    6478 out.go:304] Setting ErrFile to fd 2...
	I0327 10:51:24.276291    6478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:51:24.276409    6478 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	W0327 10:51:24.276473    6478 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18517-6013/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18517-6013/.minikube/config/config.json: no such file or directory
	I0327 10:51:24.277726    6478 out.go:298] Setting JSON to true
	I0327 10:51:24.295447    6478 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4857,"bootTime":1711557027,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 10:51:24.295515    6478 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 10:51:24.307821    6478 out.go:97] [download-only-853000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 10:51:24.311640    6478 out.go:169] MINIKUBE_LOCATION=18517
	W0327 10:51:24.307964    6478 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball: no such file or directory
	I0327 10:51:24.307989    6478 notify.go:220] Checking for updates...
	I0327 10:51:24.335875    6478 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 10:51:24.339720    6478 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 10:51:24.343644    6478 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 10:51:24.349974    6478 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	W0327 10:51:24.356722    6478 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 10:51:24.356941    6478 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 10:51:24.360649    6478 out.go:97] Using the qemu2 driver based on user configuration
	I0327 10:51:24.360670    6478 start.go:297] selected driver: qemu2
	I0327 10:51:24.360688    6478 start.go:901] validating driver "qemu2" against <nil>
	I0327 10:51:24.360756    6478 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 10:51:24.363648    6478 out.go:169] Automatically selected the socket_vmnet network
	I0327 10:51:24.370571    6478 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0327 10:51:24.370693    6478 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 10:51:24.370776    6478 cni.go:84] Creating CNI manager for ""
	I0327 10:51:24.370797    6478 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0327 10:51:24.370871    6478 start.go:340] cluster config:
	{Name:download-only-853000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-853000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 10:51:24.376461    6478 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 10:51:24.380642    6478 out.go:97] Downloading VM boot image ...
	I0327 10:51:24.380687    6478 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso
	I0327 10:51:43.555953    6478 out.go:97] Starting "download-only-853000" primary control-plane node in "download-only-853000" cluster
	I0327 10:51:43.555984    6478 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 10:51:43.842320    6478 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0327 10:51:43.842370    6478 cache.go:56] Caching tarball of preloaded images
	I0327 10:51:43.843088    6478 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 10:51:43.847621    6478 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0327 10:51:43.847652    6478 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 10:51:44.524845    6478 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0327 10:52:05.454718    6478 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 10:52:05.454903    6478 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 10:52:06.156780    6478 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0327 10:52:06.156982    6478 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/download-only-853000/config.json ...
	I0327 10:52:06.157001    6478 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/download-only-853000/config.json: {Name:mk9a4ae1f3d56b5a01555ebba0017eed2a21b775 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 10:52:06.158126    6478 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 10:52:06.158560    6478 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0327 10:52:06.807429    6478 out.go:169] 
	W0327 10:52:06.812535    6478 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1083df220 0x1083df220 0x1083df220 0x1083df220 0x1083df220 0x1083df220 0x1083df220] Decompressors:map[bz2:0x1400063f830 gz:0x1400063f838 tar:0x1400063f7e0 tar.bz2:0x1400063f7f0 tar.gz:0x1400063f800 tar.xz:0x1400063f810 tar.zst:0x1400063f820 tbz2:0x1400063f7f0 tgz:0x1400063f800 txz:0x1400063f810 tzst:0x1400063f820 xz:0x1400063f840 zip:0x1400063f850 zst:0x1400063f848] Getters:map[file:0x14000526960 http:0x140000b4a50 https:0x140000b4aa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0327 10:52:06.812559    6478 out_reason.go:110] 
	W0327 10:52:06.820392    6478 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 10:52:06.824407    6478 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-853000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (42.61s)
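
The exit status 40 above bottoms out in a 404 on the kubectl checksum URL: Kubernetes v1.20.0 appears to predate published darwin/arm64 kubectl binaries (those seem to begin around v1.21), so this download can never succeed. An illustrative check from any machine (curl is not part of the harness):

	# Both the checksum file and the binary return 404 for v1.20.0 darwin/arm64,
	# matching the "bad response code: 404" in the getter error above.
	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n1
	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl | head -n1
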

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
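
This subtest only stats the file the json-events subtest should have cached, so it fails as a direct consequence of the 404 above. An illustrative manual check of the same cache path (taken from the failure message):

	# Expected to fail with "no such file or directory" until the
	# kubectl download above can succeed.
	stat /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/darwin/arm64/v1.20.0/kubectl
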

TestOffline (10.01s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-896000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-896000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.83890925s)

-- stdout --
	* [offline-docker-896000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-896000" primary control-plane node in "offline-docker-896000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-896000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:06:24.190005    8748 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:06:24.190153    8748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:06:24.190156    8748 out.go:304] Setting ErrFile to fd 2...
	I0327 11:06:24.190159    8748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:06:24.190295    8748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:06:24.191435    8748 out.go:298] Setting JSON to false
	I0327 11:06:24.209044    8748 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5757,"bootTime":1711557027,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:06:24.209124    8748 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:06:24.214810    8748 out.go:177] * [offline-docker-896000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:06:24.221643    8748 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:06:24.224682    8748 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:06:24.221668    8748 notify.go:220] Checking for updates...
	I0327 11:06:24.230597    8748 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:06:24.233680    8748 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:06:24.236706    8748 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:06:24.237995    8748 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:06:24.241075    8748 config.go:182] Loaded profile config "multinode-906000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:06:24.241153    8748 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:06:24.245706    8748 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 11:06:24.250641    8748 start.go:297] selected driver: qemu2
	I0327 11:06:24.250650    8748 start.go:901] validating driver "qemu2" against <nil>
	I0327 11:06:24.250659    8748 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:06:24.252661    8748 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 11:06:24.255610    8748 out.go:177] * Automatically selected the socket_vmnet network
	I0327 11:06:24.258764    8748 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:06:24.258799    8748 cni.go:84] Creating CNI manager for ""
	I0327 11:06:24.258805    8748 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:06:24.258809    8748 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 11:06:24.258843    8748 start.go:340] cluster config:
	{Name:offline-docker-896000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:06:24.263374    8748 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:06:24.274693    8748 out.go:177] * Starting "offline-docker-896000" primary control-plane node in "offline-docker-896000" cluster
	I0327 11:06:24.278628    8748 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 11:06:24.278660    8748 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 11:06:24.278668    8748 cache.go:56] Caching tarball of preloaded images
	I0327 11:06:24.278739    8748 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:06:24.278744    8748 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 11:06:24.278811    8748 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/offline-docker-896000/config.json ...
	I0327 11:06:24.278822    8748 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/offline-docker-896000/config.json: {Name:mk218d82f30fdce8c325b86f551f31afc6ac3b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:06:24.279110    8748 start.go:360] acquireMachinesLock for offline-docker-896000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:06:24.279139    8748 start.go:364] duration metric: took 22.959µs to acquireMachinesLock for "offline-docker-896000"
	I0327 11:06:24.279154    8748 start.go:93] Provisioning new machine with config: &{Name:offline-docker-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:06:24.279179    8748 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:06:24.283712    8748 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 11:06:24.299009    8748 start.go:159] libmachine.API.Create for "offline-docker-896000" (driver="qemu2")
	I0327 11:06:24.299052    8748 client.go:168] LocalClient.Create starting
	I0327 11:06:24.299123    8748 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:06:24.299154    8748 main.go:141] libmachine: Decoding PEM data...
	I0327 11:06:24.299165    8748 main.go:141] libmachine: Parsing certificate...
	I0327 11:06:24.299213    8748 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:06:24.299235    8748 main.go:141] libmachine: Decoding PEM data...
	I0327 11:06:24.299241    8748 main.go:141] libmachine: Parsing certificate...
	I0327 11:06:24.299601    8748 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:06:24.446618    8748 main.go:141] libmachine: Creating SSH key...
	I0327 11:06:24.509901    8748 main.go:141] libmachine: Creating Disk image...
	I0327 11:06:24.509916    8748 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:06:24.510102    8748 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/offline-docker-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/offline-docker-896000/disk.qcow2
	I0327 11:06:24.523790    8748 main.go:141] libmachine: STDOUT: 
	I0327 11:06:24.523818    8748 main.go:141] libmachine: STDERR: 
	I0327 11:06:24.523888    8748 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/offline-docker-896000/disk.qcow2 +20000M
	I0327 11:06:24.542933    8748 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:06:24.542957    8748 main.go:141] libmachine: STDERR: 
	I0327 11:06:24.542977    8748 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/offline-docker-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/offline-docker-896000/disk.qcow2
	I0327 11:06:24.542980    8748 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:06:24.543020    8748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/offline-docker-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/offline-docker-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/offline-docker-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:c8:66:0b:45:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/offline-docker-896000/disk.qcow2
	I0327 11:06:24.544735    8748 main.go:141] libmachine: STDOUT: 
	I0327 11:06:24.544756    8748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:06:24.544775    8748 client.go:171] duration metric: took 245.718458ms to LocalClient.Create
	I0327 11:06:26.546826    8748 start.go:128] duration metric: took 2.267654667s to createHost
	I0327 11:06:26.546840    8748 start.go:83] releasing machines lock for "offline-docker-896000", held for 2.267711708s
	W0327 11:06:26.546858    8748 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:06:26.554309    8748 out.go:177] * Deleting "offline-docker-896000" in qemu2 ...
	W0327 11:06:26.564999    8748 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:06:26.565007    8748 start.go:728] Will try again in 5 seconds ...
	I0327 11:06:31.567285    8748 start.go:360] acquireMachinesLock for offline-docker-896000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:06:31.567745    8748 start.go:364] duration metric: took 332.458µs to acquireMachinesLock for "offline-docker-896000"
	I0327 11:06:31.567855    8748 start.go:93] Provisioning new machine with config: &{Name:offline-docker-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:06:31.568064    8748 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:06:31.576292    8748 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 11:06:31.624953    8748 start.go:159] libmachine.API.Create for "offline-docker-896000" (driver="qemu2")
	I0327 11:06:31.625004    8748 client.go:168] LocalClient.Create starting
	I0327 11:06:31.625127    8748 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:06:31.625192    8748 main.go:141] libmachine: Decoding PEM data...
	I0327 11:06:31.625209    8748 main.go:141] libmachine: Parsing certificate...
	I0327 11:06:31.625294    8748 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:06:31.625336    8748 main.go:141] libmachine: Decoding PEM data...
	I0327 11:06:31.625346    8748 main.go:141] libmachine: Parsing certificate...
	I0327 11:06:31.625872    8748 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:06:31.781937    8748 main.go:141] libmachine: Creating SSH key...
	I0327 11:06:31.927045    8748 main.go:141] libmachine: Creating Disk image...
	I0327 11:06:31.927058    8748 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:06:31.927224    8748 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/offline-docker-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/offline-docker-896000/disk.qcow2
	I0327 11:06:31.939618    8748 main.go:141] libmachine: STDOUT: 
	I0327 11:06:31.939638    8748 main.go:141] libmachine: STDERR: 
	I0327 11:06:31.939705    8748 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/offline-docker-896000/disk.qcow2 +20000M
	I0327 11:06:31.950347    8748 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:06:31.950363    8748 main.go:141] libmachine: STDERR: 
	I0327 11:06:31.950379    8748 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/offline-docker-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/offline-docker-896000/disk.qcow2
	I0327 11:06:31.950386    8748 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:06:31.950429    8748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/offline-docker-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/offline-docker-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/offline-docker-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:14:1a:1e:1f:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/offline-docker-896000/disk.qcow2
	I0327 11:06:31.952074    8748 main.go:141] libmachine: STDOUT: 
	I0327 11:06:31.952088    8748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:06:31.952104    8748 client.go:171] duration metric: took 327.09575ms to LocalClient.Create
	I0327 11:06:33.954276    8748 start.go:128] duration metric: took 2.386183667s to createHost
	I0327 11:06:33.954341    8748 start.go:83] releasing machines lock for "offline-docker-896000", held for 2.386592167s
	W0327 11:06:33.954685    8748 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:06:33.966227    8748 out.go:177] 
	W0327 11:06:33.970341    8748 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:06:33.970382    8748 out.go:239] * 
	* 
	W0327 11:06:33.972972    8748 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:06:33.982242    8748 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-896000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-03-27 11:06:33.998767 -0700 PDT m=+909.806859584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-896000 -n offline-docker-896000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-896000 -n offline-docker-896000: exit status 7 (68.6355ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-896000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-896000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-896000
--- FAIL: TestOffline (10.01s)
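
Almost every remaining failure in this report repeats the same 'Failed to connect to "/var/run/socket_vmnet": Connection refused' seen here, which points at the socket_vmnet daemon being down on the CI host rather than at the individual tests. A minimal triage sketch, assuming the /opt/socket_vmnet layout shown in the logs and the flags documented in the socket_vmnet README (the gateway address is an assumption, not from this log):

	# Is the socket present, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Reproduce the client-side failure exactly as minikube does; `true`
	# stands in for the qemu command line (the socket is passed as fd 3).
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# If the daemon is down, restart it (root is required to create the
	# vmnet interface; gateway address is an assumption).
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet
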

TestAddons/Setup (10.34s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-723000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-723000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.33593025s)

-- stdout --
	* [addons-723000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-723000" primary control-plane node in "addons-723000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-723000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 10:54:41.429174    6846 out.go:291] Setting OutFile to fd 1 ...
	I0327 10:54:41.429300    6846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:54:41.429303    6846 out.go:304] Setting ErrFile to fd 2...
	I0327 10:54:41.429306    6846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:54:41.429446    6846 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 10:54:41.430548    6846 out.go:298] Setting JSON to false
	I0327 10:54:41.446593    6846 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5054,"bootTime":1711557027,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 10:54:41.446657    6846 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 10:54:41.451383    6846 out.go:177] * [addons-723000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 10:54:41.458355    6846 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 10:54:41.461434    6846 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 10:54:41.458394    6846 notify.go:220] Checking for updates...
	I0327 10:54:41.464354    6846 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 10:54:41.467271    6846 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 10:54:41.470355    6846 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 10:54:41.473327    6846 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 10:54:41.476406    6846 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 10:54:41.483226    6846 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 10:54:41.487275    6846 start.go:297] selected driver: qemu2
	I0327 10:54:41.487280    6846 start.go:901] validating driver "qemu2" against <nil>
	I0327 10:54:41.487286    6846 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 10:54:41.489529    6846 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 10:54:41.492352    6846 out.go:177] * Automatically selected the socket_vmnet network
	I0327 10:54:41.495411    6846 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 10:54:41.495462    6846 cni.go:84] Creating CNI manager for ""
	I0327 10:54:41.495469    6846 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 10:54:41.495474    6846 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 10:54:41.495510    6846 start.go:340] cluster config:
	{Name:addons-723000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-723000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 10:54:41.499896    6846 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 10:54:41.508377    6846 out.go:177] * Starting "addons-723000" primary control-plane node in "addons-723000" cluster
	I0327 10:54:41.512315    6846 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 10:54:41.512332    6846 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 10:54:41.512346    6846 cache.go:56] Caching tarball of preloaded images
	I0327 10:54:41.512408    6846 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 10:54:41.512414    6846 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 10:54:41.512667    6846 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/addons-723000/config.json ...
	I0327 10:54:41.512680    6846 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/addons-723000/config.json: {Name:mk0086279c7549e4d3dfdb6c3871c8187b266e33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 10:54:41.512925    6846 start.go:360] acquireMachinesLock for addons-723000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 10:54:41.513054    6846 start.go:364] duration metric: took 122.375µs to acquireMachinesLock for "addons-723000"
	I0327 10:54:41.513068    6846 start.go:93] Provisioning new machine with config: &{Name:addons-723000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-723000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 10:54:41.513109    6846 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 10:54:41.521339    6846 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0327 10:54:41.541133    6846 start.go:159] libmachine.API.Create for "addons-723000" (driver="qemu2")
	I0327 10:54:41.541156    6846 client.go:168] LocalClient.Create starting
	I0327 10:54:41.541292    6846 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 10:54:41.718966    6846 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 10:54:41.848957    6846 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 10:54:42.041983    6846 main.go:141] libmachine: Creating SSH key...
	I0327 10:54:42.107280    6846 main.go:141] libmachine: Creating Disk image...
	I0327 10:54:42.107286    6846 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 10:54:42.107465    6846 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/addons-723000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/addons-723000/disk.qcow2
	I0327 10:54:42.120427    6846 main.go:141] libmachine: STDOUT: 
	I0327 10:54:42.120451    6846 main.go:141] libmachine: STDERR: 
	I0327 10:54:42.120497    6846 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/addons-723000/disk.qcow2 +20000M
	I0327 10:54:42.131336    6846 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 10:54:42.131351    6846 main.go:141] libmachine: STDERR: 
	I0327 10:54:42.131362    6846 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/addons-723000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/addons-723000/disk.qcow2
	I0327 10:54:42.131368    6846 main.go:141] libmachine: Starting QEMU VM...
	I0327 10:54:42.131393    6846 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/addons-723000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/addons-723000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/addons-723000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:b7:25:00:fb:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/addons-723000/disk.qcow2
	I0327 10:54:42.133122    6846 main.go:141] libmachine: STDOUT: 
	I0327 10:54:42.133143    6846 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 10:54:42.133160    6846 client.go:171] duration metric: took 592.001792ms to LocalClient.Create
	I0327 10:54:44.135339    6846 start.go:128] duration metric: took 2.622221625s to createHost
	I0327 10:54:44.135404    6846 start.go:83] releasing machines lock for "addons-723000", held for 2.6223555s
	W0327 10:54:44.135511    6846 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 10:54:44.150831    6846 out.go:177] * Deleting "addons-723000" in qemu2 ...
	W0327 10:54:44.179583    6846 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 10:54:44.179610    6846 start.go:728] Will try again in 5 seconds ...
	I0327 10:54:49.181829    6846 start.go:360] acquireMachinesLock for addons-723000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 10:54:49.182355    6846 start.go:364] duration metric: took 398.375µs to acquireMachinesLock for "addons-723000"
	I0327 10:54:49.182561    6846 start.go:93] Provisioning new machine with config: &{Name:addons-723000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-723000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 10:54:49.182855    6846 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 10:54:49.192407    6846 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0327 10:54:49.242156    6846 start.go:159] libmachine.API.Create for "addons-723000" (driver="qemu2")
	I0327 10:54:49.242215    6846 client.go:168] LocalClient.Create starting
	I0327 10:54:49.242329    6846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 10:54:49.242388    6846 main.go:141] libmachine: Decoding PEM data...
	I0327 10:54:49.242417    6846 main.go:141] libmachine: Parsing certificate...
	I0327 10:54:49.242479    6846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 10:54:49.242520    6846 main.go:141] libmachine: Decoding PEM data...
	I0327 10:54:49.242535    6846 main.go:141] libmachine: Parsing certificate...
	I0327 10:54:49.243085    6846 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 10:54:49.424170    6846 main.go:141] libmachine: Creating SSH key...
	I0327 10:54:49.658018    6846 main.go:141] libmachine: Creating Disk image...
	I0327 10:54:49.658026    6846 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 10:54:49.658283    6846 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/addons-723000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/addons-723000/disk.qcow2
	I0327 10:54:49.671489    6846 main.go:141] libmachine: STDOUT: 
	I0327 10:54:49.671515    6846 main.go:141] libmachine: STDERR: 
	I0327 10:54:49.671572    6846 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/addons-723000/disk.qcow2 +20000M
	I0327 10:54:49.684180    6846 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 10:54:49.684201    6846 main.go:141] libmachine: STDERR: 
	I0327 10:54:49.684223    6846 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/addons-723000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/addons-723000/disk.qcow2
	I0327 10:54:49.684228    6846 main.go:141] libmachine: Starting QEMU VM...
	I0327 10:54:49.684258    6846 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/addons-723000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/addons-723000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/addons-723000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:77:7a:fa:be:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/addons-723000/disk.qcow2
	I0327 10:54:49.686132    6846 main.go:141] libmachine: STDOUT: 
	I0327 10:54:49.686147    6846 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 10:54:49.686162    6846 client.go:171] duration metric: took 443.945792ms to LocalClient.Create
	I0327 10:54:51.688393    6846 start.go:128] duration metric: took 2.50548775s to createHost
	I0327 10:54:51.688480    6846 start.go:83] releasing machines lock for "addons-723000", held for 2.506058458s
	W0327 10:54:51.688887    6846 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-723000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-723000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 10:54:51.697826    6846 out.go:177] 
	W0327 10:54:51.708014    6846 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 10:54:51.708084    6846 out.go:239] * 
	* 
	W0327 10:54:51.710693    6846 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 10:54:51.723834    6846 out.go:177] 

** /stderr **
addons_test.go:111: out/minikube-darwin-arm64 start -p addons-723000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.34s)
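The stderr trace above shows the two-attempt start flow: the first createHost fails (start.go:713), the half-created profile is deleted, and a single retry runs five seconds later (start.go:728) before the run exits with GUEST_PROVISION. A schematic Go sketch of that observed flow, using a stand-in createHost; this illustrates the log, not minikube's implementation:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // createHost stands in for the QEMU VM creation that fails twice above.
    func createHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := createHost(); err != nil {
            fmt.Printf("! StartHost failed, but will try again: %v\n", err)
            time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
            if err := createHost(); err != nil {
                fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
            }
        }
    }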

TestCertOptions (10.14s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-567000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-567000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.84638675s)

-- stdout --
	* [cert-options-567000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-567000" primary control-plane node in "cert-options-567000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-567000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-567000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-567000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-567000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-567000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (78.446083ms)

-- stdout --
	* The control-plane node cert-options-567000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-567000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-567000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-567000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-567000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-567000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (43.503708ms)

-- stdout --
	* The control-plane node cert-options-567000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-567000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-567000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-567000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-567000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-03-27 11:07:04.668599 -0700 PDT m=+940.476889334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-567000 -n cert-options-567000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-567000 -n cert-options-567000: exit status 7 (32.312792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-567000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-567000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-567000
--- FAIL: TestCertOptions (10.14s)
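Because the VM never came up, the SAN assertions at cert_options_test.go:69 had no certificate to inspect. For reference, a minimal Go sketch of an equivalent check against a local copy of the apiserver certificate (the file name apiserver.crt is assumed; the expected SANs come from the --apiserver-ips/--apiserver-names flags above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Local copy of /var/lib/minikube/certs/apiserver.crt from the node.
        data, err := os.ReadFile("apiserver.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("IP SANs: ", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15
        fmt.Println("DNS SANs:", cert.DNSNames)    // expect localhost, www.google.com
    }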

TestCertExpiration (195.37s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-739000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-739000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.974457959s)

-- stdout --
	* [cert-expiration-739000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-739000" primary control-plane node in "cert-expiration-739000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-739000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-739000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-739000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-739000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-739000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.233177375s)

-- stdout --
	* [cert-expiration-739000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-739000" primary control-plane node in "cert-expiration-739000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-739000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-739000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-739000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-739000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-739000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-739000" primary control-plane node in "cert-expiration-739000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-739000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-739000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-739000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-03-27 11:10:04.68377 -0700 PDT m=+1120.493222418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-739000 -n cert-expiration-739000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-739000 -n cert-expiration-739000: exit status 7 (52.221833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-739000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-739000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-739000
--- FAIL: TestCertExpiration (195.37s)
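With the host stopped, the test could neither issue a certificate under --cert-expiration=3m nor observe the expired-cert warning on the later start. A minimal Go sketch of the underlying expiry check, assuming a local PEM copy of the certificate (the file name is hypothetical):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        left := time.Until(cert.NotAfter)
        fmt.Printf("expires %s (%s from now)\n", cert.NotAfter.Format(time.RFC3339), left)
        if left < 0 {
            fmt.Println("certificate has expired; a subsequent start should warn about it")
        }
    }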

TestDockerFlags (10.27s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-486000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-486000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.006412292s)

-- stdout --
	* [docker-flags-486000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-486000" primary control-plane node in "docker-flags-486000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-486000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:06:44.425418    8963 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:06:44.425545    8963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:06:44.425548    8963 out.go:304] Setting ErrFile to fd 2...
	I0327 11:06:44.425550    8963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:06:44.425691    8963 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:06:44.426739    8963 out.go:298] Setting JSON to false
	I0327 11:06:44.442791    8963 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5777,"bootTime":1711557027,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:06:44.442854    8963 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:06:44.448554    8963 out.go:177] * [docker-flags-486000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:06:44.459533    8963 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:06:44.454580    8963 notify.go:220] Checking for updates...
	I0327 11:06:44.466443    8963 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:06:44.470447    8963 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:06:44.473423    8963 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:06:44.476424    8963 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:06:44.479476    8963 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:06:44.482749    8963 config.go:182] Loaded profile config "force-systemd-flag-743000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:06:44.482812    8963 config.go:182] Loaded profile config "multinode-906000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:06:44.482866    8963 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:06:44.487487    8963 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 11:06:44.494428    8963 start.go:297] selected driver: qemu2
	I0327 11:06:44.494434    8963 start.go:901] validating driver "qemu2" against <nil>
	I0327 11:06:44.494439    8963 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:06:44.496651    8963 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 11:06:44.500450    8963 out.go:177] * Automatically selected the socket_vmnet network
	I0327 11:06:44.503620    8963 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0327 11:06:44.503682    8963 cni.go:84] Creating CNI manager for ""
	I0327 11:06:44.503690    8963 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:06:44.503695    8963 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 11:06:44.503737    8963 start.go:340] cluster config:
	{Name:docker-flags-486000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-486000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:06:44.508370    8963 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:06:44.515476    8963 out.go:177] * Starting "docker-flags-486000" primary control-plane node in "docker-flags-486000" cluster
	I0327 11:06:44.518413    8963 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 11:06:44.518427    8963 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 11:06:44.518435    8963 cache.go:56] Caching tarball of preloaded images
	I0327 11:06:44.518488    8963 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:06:44.518494    8963 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 11:06:44.518554    8963 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/docker-flags-486000/config.json ...
	I0327 11:06:44.518567    8963 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/docker-flags-486000/config.json: {Name:mk9d9e8b499370ad0ab5f5995503da11b0ed5de3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:06:44.518854    8963 start.go:360] acquireMachinesLock for docker-flags-486000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:06:44.518888    8963 start.go:364] duration metric: took 27.542µs to acquireMachinesLock for "docker-flags-486000"
	I0327 11:06:44.518905    8963 start.go:93] Provisioning new machine with config: &{Name:docker-flags-486000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-486000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:06:44.518937    8963 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:06:44.522552    8963 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 11:06:44.540482    8963 start.go:159] libmachine.API.Create for "docker-flags-486000" (driver="qemu2")
	I0327 11:06:44.540509    8963 client.go:168] LocalClient.Create starting
	I0327 11:06:44.540568    8963 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:06:44.540600    8963 main.go:141] libmachine: Decoding PEM data...
	I0327 11:06:44.540608    8963 main.go:141] libmachine: Parsing certificate...
	I0327 11:06:44.540655    8963 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:06:44.540677    8963 main.go:141] libmachine: Decoding PEM data...
	I0327 11:06:44.540684    8963 main.go:141] libmachine: Parsing certificate...
	I0327 11:06:44.541050    8963 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:06:44.684371    8963 main.go:141] libmachine: Creating SSH key...
	I0327 11:06:44.760706    8963 main.go:141] libmachine: Creating Disk image...
	I0327 11:06:44.760711    8963 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:06:44.760874    8963 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/docker-flags-486000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/docker-flags-486000/disk.qcow2
	I0327 11:06:44.773561    8963 main.go:141] libmachine: STDOUT: 
	I0327 11:06:44.773578    8963 main.go:141] libmachine: STDERR: 
	I0327 11:06:44.773636    8963 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/docker-flags-486000/disk.qcow2 +20000M
	I0327 11:06:44.784397    8963 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:06:44.784413    8963 main.go:141] libmachine: STDERR: 
	I0327 11:06:44.784424    8963 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/docker-flags-486000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/docker-flags-486000/disk.qcow2
	I0327 11:06:44.784430    8963 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:06:44.784464    8963 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/docker-flags-486000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/docker-flags-486000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/docker-flags-486000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:f1:14:44:a1:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/docker-flags-486000/disk.qcow2
	I0327 11:06:44.786279    8963 main.go:141] libmachine: STDOUT: 
	I0327 11:06:44.786296    8963 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:06:44.786317    8963 client.go:171] duration metric: took 245.80425ms to LocalClient.Create
	I0327 11:06:46.788540    8963 start.go:128] duration metric: took 2.269594542s to createHost
	I0327 11:06:46.788588    8963 start.go:83] releasing machines lock for "docker-flags-486000", held for 2.269701625s
	W0327 11:06:46.788646    8963 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:06:46.803831    8963 out.go:177] * Deleting "docker-flags-486000" in qemu2 ...
	W0327 11:06:46.831390    8963 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:06:46.831414    8963 start.go:728] Will try again in 5 seconds ...
	I0327 11:06:51.833605    8963 start.go:360] acquireMachinesLock for docker-flags-486000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:06:51.873762    8963 start.go:364] duration metric: took 40.006834ms to acquireMachinesLock for "docker-flags-486000"
	I0327 11:06:51.873878    8963 start.go:93] Provisioning new machine with config: &{Name:docker-flags-486000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-486000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:06:51.874274    8963 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:06:51.883468    8963 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 11:06:51.931811    8963 start.go:159] libmachine.API.Create for "docker-flags-486000" (driver="qemu2")
	I0327 11:06:51.931855    8963 client.go:168] LocalClient.Create starting
	I0327 11:06:51.931983    8963 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:06:51.932041    8963 main.go:141] libmachine: Decoding PEM data...
	I0327 11:06:51.932057    8963 main.go:141] libmachine: Parsing certificate...
	I0327 11:06:51.932121    8963 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:06:51.932166    8963 main.go:141] libmachine: Decoding PEM data...
	I0327 11:06:51.932181    8963 main.go:141] libmachine: Parsing certificate...
	I0327 11:06:51.932708    8963 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:06:52.114896    8963 main.go:141] libmachine: Creating SSH key...
	I0327 11:06:52.317086    8963 main.go:141] libmachine: Creating Disk image...
	I0327 11:06:52.317093    8963 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:06:52.317295    8963 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/docker-flags-486000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/docker-flags-486000/disk.qcow2
	I0327 11:06:52.330434    8963 main.go:141] libmachine: STDOUT: 
	I0327 11:06:52.330455    8963 main.go:141] libmachine: STDERR: 
	I0327 11:06:52.330522    8963 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/docker-flags-486000/disk.qcow2 +20000M
	I0327 11:06:52.341265    8963 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:06:52.341282    8963 main.go:141] libmachine: STDERR: 
	I0327 11:06:52.341294    8963 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/docker-flags-486000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/docker-flags-486000/disk.qcow2
	I0327 11:06:52.341297    8963 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:06:52.341344    8963 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/docker-flags-486000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/docker-flags-486000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/docker-flags-486000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:64:de:a7:e5:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/docker-flags-486000/disk.qcow2
	I0327 11:06:52.343043    8963 main.go:141] libmachine: STDOUT: 
	I0327 11:06:52.343059    8963 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:06:52.343073    8963 client.go:171] duration metric: took 411.214416ms to LocalClient.Create
	I0327 11:06:54.345255    8963 start.go:128] duration metric: took 2.470963166s to createHost
	I0327 11:06:54.345323    8963 start.go:83] releasing machines lock for "docker-flags-486000", held for 2.47154325s
	W0327 11:06:54.345724    8963 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-486000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:06:54.359395    8963 out.go:177] 
	W0327 11:06:54.371732    8963 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:06:54.371775    8963 out.go:239] * 
	W0327 11:06:54.375690    8963 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:06:54.387399    8963 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-486000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
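Note: the VM never booted, so the two ssh probes that follow can only fail with exit status 83 (the profile exists but the host is Stopped). On a healthy cluster they would confirm that the --docker-env and --docker-opt values reached dockerd's systemd unit. A sketch of the passing case, with the expected values read off the start flags above (the exact Environment/ExecStart contents are an assumption, not taken from this run):

	out/minikube-darwin-arm64 -p docker-flags-486000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	# Environment= should include FOO=BAR and BAZ=BAT
	out/minikube-darwin-arm64 -p docker-flags-486000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
	# ExecStart= should include --debug and, given --docker-opt=icc=true, presumably --icc=true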
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-486000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-486000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (88.165875ms)

-- stdout --
	* The control-plane node docker-flags-486000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-486000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-486000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-486000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-486000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-486000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-486000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-486000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-486000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (43.772042ms)

-- stdout --
	* The control-plane node docker-flags-486000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-486000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-486000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-486000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-486000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-486000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-03-27 11:06:54.537538 -0700 PDT m=+930.345762751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-486000 -n docker-flags-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-486000 -n docker-flags-486000: exit status 7 (30.840667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-486000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-486000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-486000
--- FAIL: TestDockerFlags (10.27s)
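Note: every qemu2 start in this run dies at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet, so qemu-system-aarch64 is never launched and the profile is left Stopped. A minimal triage sketch for the CI host, assuming socket_vmnet was installed via Homebrew and runs as a root launchd service (the service name and paths below are assumptions, not taken from this log):

	ls -l /var/run/socket_vmnet                  # does the unix socket the tests expect exist?
	sudo launchctl list | grep -i socket_vmnet   # is the daemon loaded?
	sudo brew services restart socket_vmnet      # restart it if it is down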

TestForceSystemdFlag (10.16s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-743000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-743000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.934931583s)

-- stdout --
	* [force-systemd-flag-743000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-743000" primary control-plane node in "force-systemd-flag-743000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-743000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:06:39.341689    8934 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:06:39.341800    8934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:06:39.341803    8934 out.go:304] Setting ErrFile to fd 2...
	I0327 11:06:39.341806    8934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:06:39.341934    8934 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:06:39.342973    8934 out.go:298] Setting JSON to false
	I0327 11:06:39.358988    8934 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5772,"bootTime":1711557027,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:06:39.359067    8934 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:06:39.364920    8934 out.go:177] * [force-systemd-flag-743000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:06:39.372020    8934 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:06:39.376915    8934 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:06:39.372087    8934 notify.go:220] Checking for updates...
	I0327 11:06:39.383904    8934 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:06:39.390879    8934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:06:39.393944    8934 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:06:39.396913    8934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:06:39.400235    8934 config.go:182] Loaded profile config "force-systemd-env-226000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:06:39.400304    8934 config.go:182] Loaded profile config "multinode-906000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:06:39.400349    8934 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:06:39.404917    8934 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 11:06:39.411880    8934 start.go:297] selected driver: qemu2
	I0327 11:06:39.411887    8934 start.go:901] validating driver "qemu2" against <nil>
	I0327 11:06:39.411892    8934 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:06:39.414183    8934 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 11:06:39.418939    8934 out.go:177] * Automatically selected the socket_vmnet network
	I0327 11:06:39.421964    8934 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 11:06:39.422001    8934 cni.go:84] Creating CNI manager for ""
	I0327 11:06:39.422010    8934 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:06:39.422014    8934 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 11:06:39.422048    8934 start.go:340] cluster config:
	{Name:force-systemd-flag-743000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-743000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:06:39.426680    8934 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:06:39.433907    8934 out.go:177] * Starting "force-systemd-flag-743000" primary control-plane node in "force-systemd-flag-743000" cluster
	I0327 11:06:39.437920    8934 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 11:06:39.437936    8934 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 11:06:39.437945    8934 cache.go:56] Caching tarball of preloaded images
	I0327 11:06:39.438004    8934 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:06:39.438010    8934 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 11:06:39.438098    8934 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/force-systemd-flag-743000/config.json ...
	I0327 11:06:39.438112    8934 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/force-systemd-flag-743000/config.json: {Name:mk63476e37a19f5136b232b712434ed64656c90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:06:39.438356    8934 start.go:360] acquireMachinesLock for force-systemd-flag-743000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:06:39.438394    8934 start.go:364] duration metric: took 30.75µs to acquireMachinesLock for "force-systemd-flag-743000"
	I0327 11:06:39.438415    8934 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-743000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-743000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:06:39.438449    8934 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:06:39.441946    8934 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 11:06:39.459715    8934 start.go:159] libmachine.API.Create for "force-systemd-flag-743000" (driver="qemu2")
	I0327 11:06:39.459740    8934 client.go:168] LocalClient.Create starting
	I0327 11:06:39.459796    8934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:06:39.459825    8934 main.go:141] libmachine: Decoding PEM data...
	I0327 11:06:39.459837    8934 main.go:141] libmachine: Parsing certificate...
	I0327 11:06:39.459881    8934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:06:39.459903    8934 main.go:141] libmachine: Decoding PEM data...
	I0327 11:06:39.459910    8934 main.go:141] libmachine: Parsing certificate...
	I0327 11:06:39.460282    8934 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:06:39.605316    8934 main.go:141] libmachine: Creating SSH key...
	I0327 11:06:39.758786    8934 main.go:141] libmachine: Creating Disk image...
	I0327 11:06:39.758798    8934 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:06:39.758974    8934 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-flag-743000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-flag-743000/disk.qcow2
	I0327 11:06:39.771381    8934 main.go:141] libmachine: STDOUT: 
	I0327 11:06:39.771398    8934 main.go:141] libmachine: STDERR: 
	I0327 11:06:39.771448    8934 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-flag-743000/disk.qcow2 +20000M
	I0327 11:06:39.782079    8934 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:06:39.782106    8934 main.go:141] libmachine: STDERR: 
	I0327 11:06:39.782122    8934 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-flag-743000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-flag-743000/disk.qcow2
	I0327 11:06:39.782129    8934 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:06:39.782155    8934 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-flag-743000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-flag-743000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-flag-743000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:68:99:83:5d:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-flag-743000/disk.qcow2
	I0327 11:06:39.783931    8934 main.go:141] libmachine: STDOUT: 
	I0327 11:06:39.783944    8934 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:06:39.783963    8934 client.go:171] duration metric: took 324.219166ms to LocalClient.Create
	I0327 11:06:41.786180    8934 start.go:128] duration metric: took 2.347722416s to createHost
	I0327 11:06:41.786337    8934 start.go:83] releasing machines lock for "force-systemd-flag-743000", held for 2.347881375s
	W0327 11:06:41.786541    8934 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:06:41.814848    8934 out.go:177] * Deleting "force-systemd-flag-743000" in qemu2 ...
	W0327 11:06:41.836938    8934 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:06:41.836963    8934 start.go:728] Will try again in 5 seconds ...
	I0327 11:06:46.839045    8934 start.go:360] acquireMachinesLock for force-systemd-flag-743000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:06:46.839250    8934 start.go:364] duration metric: took 155.125µs to acquireMachinesLock for "force-systemd-flag-743000"
	I0327 11:06:46.839336    8934 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-743000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-743000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:06:46.839521    8934 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:06:46.846842    8934 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 11:06:46.886692    8934 start.go:159] libmachine.API.Create for "force-systemd-flag-743000" (driver="qemu2")
	I0327 11:06:46.886743    8934 client.go:168] LocalClient.Create starting
	I0327 11:06:46.886864    8934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:06:46.886920    8934 main.go:141] libmachine: Decoding PEM data...
	I0327 11:06:46.886936    8934 main.go:141] libmachine: Parsing certificate...
	I0327 11:06:46.886994    8934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:06:46.887035    8934 main.go:141] libmachine: Decoding PEM data...
	I0327 11:06:46.887044    8934 main.go:141] libmachine: Parsing certificate...
	I0327 11:06:46.888024    8934 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:06:47.049389    8934 main.go:141] libmachine: Creating SSH key...
	I0327 11:06:47.171517    8934 main.go:141] libmachine: Creating Disk image...
	I0327 11:06:47.171527    8934 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:06:47.171703    8934 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-flag-743000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-flag-743000/disk.qcow2
	I0327 11:06:47.184313    8934 main.go:141] libmachine: STDOUT: 
	I0327 11:06:47.184332    8934 main.go:141] libmachine: STDERR: 
	I0327 11:06:47.184381    8934 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-flag-743000/disk.qcow2 +20000M
	I0327 11:06:47.195164    8934 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:06:47.195273    8934 main.go:141] libmachine: STDERR: 
	I0327 11:06:47.195284    8934 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-flag-743000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-flag-743000/disk.qcow2
	I0327 11:06:47.195288    8934 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:06:47.195310    8934 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-flag-743000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-flag-743000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-flag-743000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:b8:51:ec:5f:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-flag-743000/disk.qcow2
	I0327 11:06:47.197002    8934 main.go:141] libmachine: STDOUT: 
	I0327 11:06:47.197055    8934 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:06:47.197068    8934 client.go:171] duration metric: took 310.320625ms to LocalClient.Create
	I0327 11:06:49.199349    8934 start.go:128] duration metric: took 2.359732334s to createHost
	I0327 11:06:49.199434    8934 start.go:83] releasing machines lock for "force-systemd-flag-743000", held for 2.36017525s
	W0327 11:06:49.199782    8934 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-743000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:06:49.213438    8934 out.go:177] 
	W0327 11:06:49.217620    8934 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:06:49.217645    8934 out.go:239] * 
	W0327 11:06:49.219989    8934 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:06:49.232305    8934 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-743000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-743000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-743000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.933584ms)

-- stdout --
	* The control-plane node force-systemd-flag-743000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-743000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-743000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-03-27 11:06:49.331752 -0700 PDT m=+925.139943043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-743000 -n force-systemd-flag-743000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-743000 -n force-systemd-flag-743000: exit status 7 (37.852208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-743000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-743000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-743000
--- FAIL: TestForceSystemdFlag (10.16s)
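Note: TestForceSystemdFlag fails on the same socket_vmnet connection. Had the VM come up, the probe at docker_test.go:110 above would inspect the cgroup driver inside the guest; a sketch of the passing case, assuming --force-systemd is expected to switch Docker to the systemd cgroup driver:

	out/minikube-darwin-arm64 -p force-systemd-flag-743000 ssh "docker info --format {{.CgroupDriver}}"
	# expected: systemd (cgroupfs would mean the flag was not applied)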

TestForceSystemdEnv (10.23s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-226000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-226000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.012793s)

-- stdout --
	* [force-systemd-env-226000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-226000" primary control-plane node in "force-systemd-env-226000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-226000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:06:34.200613    8898 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:06:34.200735    8898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:06:34.200739    8898 out.go:304] Setting ErrFile to fd 2...
	I0327 11:06:34.200742    8898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:06:34.200880    8898 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:06:34.201966    8898 out.go:298] Setting JSON to false
	I0327 11:06:34.218449    8898 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5767,"bootTime":1711557027,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:06:34.218516    8898 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:06:34.224182    8898 out.go:177] * [force-systemd-env-226000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:06:34.237152    8898 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:06:34.233246    8898 notify.go:220] Checking for updates...
	I0327 11:06:34.245216    8898 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:06:34.253171    8898 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:06:34.261134    8898 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:06:34.269118    8898 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:06:34.276992    8898 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0327 11:06:34.281503    8898 config.go:182] Loaded profile config "multinode-906000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:06:34.281555    8898 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:06:34.287040    8898 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 11:06:34.294207    8898 start.go:297] selected driver: qemu2
	I0327 11:06:34.294214    8898 start.go:901] validating driver "qemu2" against <nil>
	I0327 11:06:34.294220    8898 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:06:34.296624    8898 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 11:06:34.299116    8898 out.go:177] * Automatically selected the socket_vmnet network
	I0327 11:06:34.303226    8898 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 11:06:34.303266    8898 cni.go:84] Creating CNI manager for ""
	I0327 11:06:34.303274    8898 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:06:34.303279    8898 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 11:06:34.303313    8898 start.go:340] cluster config:
	{Name:force-systemd-env-226000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-226000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:06:34.308016    8898 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:06:34.311216    8898 out.go:177] * Starting "force-systemd-env-226000" primary control-plane node in "force-systemd-env-226000" cluster
	I0327 11:06:34.319194    8898 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 11:06:34.319207    8898 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 11:06:34.319213    8898 cache.go:56] Caching tarball of preloaded images
	I0327 11:06:34.319260    8898 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:06:34.319265    8898 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 11:06:34.319324    8898 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/force-systemd-env-226000/config.json ...
	I0327 11:06:34.319336    8898 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/force-systemd-env-226000/config.json: {Name:mk400df6135d6c746c4363523ab2717443deea00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:06:34.319631    8898 start.go:360] acquireMachinesLock for force-systemd-env-226000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:06:34.319661    8898 start.go:364] duration metric: took 23.417µs to acquireMachinesLock for "force-systemd-env-226000"
	I0327 11:06:34.319675    8898 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-226000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-226000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:06:34.319699    8898 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:06:34.324162    8898 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 11:06:34.339890    8898 start.go:159] libmachine.API.Create for "force-systemd-env-226000" (driver="qemu2")
	I0327 11:06:34.339917    8898 client.go:168] LocalClient.Create starting
	I0327 11:06:34.339971    8898 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:06:34.339998    8898 main.go:141] libmachine: Decoding PEM data...
	I0327 11:06:34.340006    8898 main.go:141] libmachine: Parsing certificate...
	I0327 11:06:34.340045    8898 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:06:34.340066    8898 main.go:141] libmachine: Decoding PEM data...
	I0327 11:06:34.340071    8898 main.go:141] libmachine: Parsing certificate...
	I0327 11:06:34.340432    8898 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:06:34.514975    8898 main.go:141] libmachine: Creating SSH key...
	I0327 11:06:34.638910    8898 main.go:141] libmachine: Creating Disk image...
	I0327 11:06:34.638920    8898 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:06:34.639117    8898 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-env-226000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-env-226000/disk.qcow2
	I0327 11:06:34.651792    8898 main.go:141] libmachine: STDOUT: 
	I0327 11:06:34.651827    8898 main.go:141] libmachine: STDERR: 
	I0327 11:06:34.651881    8898 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-env-226000/disk.qcow2 +20000M
	I0327 11:06:34.662933    8898 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:06:34.662951    8898 main.go:141] libmachine: STDERR: 
	I0327 11:06:34.662965    8898 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-env-226000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-env-226000/disk.qcow2
	I0327 11:06:34.662970    8898 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:06:34.662999    8898 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-env-226000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-env-226000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-env-226000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:b0:3f:99:3d:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-env-226000/disk.qcow2
	I0327 11:06:34.664828    8898 main.go:141] libmachine: STDOUT: 
	I0327 11:06:34.664845    8898 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:06:34.664864    8898 client.go:171] duration metric: took 324.944833ms to LocalClient.Create
	I0327 11:06:36.667315    8898 start.go:128] duration metric: took 2.347606083s to createHost
	I0327 11:06:36.667513    8898 start.go:83] releasing machines lock for "force-systemd-env-226000", held for 2.347852583s
	W0327 11:06:36.667586    8898 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:06:36.675705    8898 out.go:177] * Deleting "force-systemd-env-226000" in qemu2 ...
	W0327 11:06:36.707190    8898 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:06:36.707227    8898 start.go:728] Will try again in 5 seconds ...
	I0327 11:06:41.709447    8898 start.go:360] acquireMachinesLock for force-systemd-env-226000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:06:41.786588    8898 start.go:364] duration metric: took 76.989541ms to acquireMachinesLock for "force-systemd-env-226000"
	I0327 11:06:41.786788    8898 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-226000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-226000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:06:41.787101    8898 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:06:41.802764    8898 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 11:06:41.852590    8898 start.go:159] libmachine.API.Create for "force-systemd-env-226000" (driver="qemu2")
	I0327 11:06:41.852638    8898 client.go:168] LocalClient.Create starting
	I0327 11:06:41.852766    8898 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:06:41.852821    8898 main.go:141] libmachine: Decoding PEM data...
	I0327 11:06:41.852838    8898 main.go:141] libmachine: Parsing certificate...
	I0327 11:06:41.852898    8898 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:06:41.852938    8898 main.go:141] libmachine: Decoding PEM data...
	I0327 11:06:41.852951    8898 main.go:141] libmachine: Parsing certificate...
	I0327 11:06:41.853637    8898 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:06:42.025406    8898 main.go:141] libmachine: Creating SSH key...
	I0327 11:06:42.103424    8898 main.go:141] libmachine: Creating Disk image...
	I0327 11:06:42.103435    8898 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:06:42.103592    8898 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-env-226000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-env-226000/disk.qcow2
	I0327 11:06:42.115835    8898 main.go:141] libmachine: STDOUT: 
	I0327 11:06:42.115855    8898 main.go:141] libmachine: STDERR: 
	I0327 11:06:42.115907    8898 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-env-226000/disk.qcow2 +20000M
	I0327 11:06:42.126544    8898 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:06:42.126572    8898 main.go:141] libmachine: STDERR: 
	I0327 11:06:42.126590    8898 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-env-226000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-env-226000/disk.qcow2
	I0327 11:06:42.126600    8898 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:06:42.126638    8898 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-env-226000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-env-226000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-env-226000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:8a:ad:22:33:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/force-systemd-env-226000/disk.qcow2
	I0327 11:06:42.128468    8898 main.go:141] libmachine: STDOUT: 
	I0327 11:06:42.128485    8898 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:06:42.128498    8898 client.go:171] duration metric: took 275.857042ms to LocalClient.Create
	I0327 11:06:44.130703    8898 start.go:128] duration metric: took 2.343575584s to createHost
	I0327 11:06:44.130796    8898 start.go:83] releasing machines lock for "force-systemd-env-226000", held for 2.34419225s
	W0327 11:06:44.131286    8898 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-226000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-226000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:06:44.145957    8898 out.go:177] 
	W0327 11:06:44.151075    8898 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:06:44.151099    8898 out.go:239] * 
	* 
	W0327 11:06:44.153721    8898 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:06:44.166882    8898 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-226000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-226000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-226000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.348917ms)

-- stdout --
	* The control-plane node force-systemd-env-226000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-226000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-226000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-03-27 11:06:44.262631 -0700 PDT m=+920.070789793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-226000 -n force-systemd-env-226000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-226000 -n force-systemd-env-226000: exit status 7 (34.899833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-226000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-226000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-226000
--- FAIL: TestForceSystemdEnv (10.23s)
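The failures in this section share one root cause, visible in the STDERR above: nothing is listening on /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client cannot pass qemu-system-aarch64 a network file descriptor and the VM never boots. A minimal Go sketch of a pre-flight probe for that socket (illustrative only, not part of minikube or this test suite):

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Dial the Unix socket that socket_vmnet_client needs; a
        // "connection refused" here reproduces the STDERR above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

A failed dial means the socket_vmnet daemon on the CI host needs to be (re)started before any qemu2-driver test can pass.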

TestErrorSpam/setup (10.09s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-472000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-472000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 --driver=qemu2 : exit status 80 (10.084565833s)

-- stdout --
	* [nospam-472000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-472000" primary control-plane node in "nospam-472000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-472000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-472000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-472000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-472000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=18517
- KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-472000" primary control-plane node in "nospam-472000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-472000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (10.09s)
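TestErrorSpam asserts that a clean start emits no unexpected stderr; each `unexpected stderr` line above is output that failed that filter, and the three `missing kubeadm init sub-step` lines are expected stdout that never appeared because the VM never booted. A self-contained sketch of such an allowlist filter, with hypothetical names (the real logic lives in error_spam_test.go):

    package main

    import (
        "fmt"
        "strings"
    )

    // unexpectedLines returns each non-empty stderr line that matches none
    // of the allowed substrings. Names and the allowlist are illustrative.
    func unexpectedLines(stderr string, allowed []string) []string {
        var bad []string
        for _, line := range strings.Split(stderr, "\n") {
            line = strings.TrimSpace(line)
            if line == "" {
                continue
            }
            ok := false
            for _, a := range allowed {
                if strings.Contains(line, a) {
                    ok = true
                    break
                }
            }
            if !ok {
                bad = append(bad, line)
            }
        }
        return bad
    }

    func main() {
        stderr := "! StartHost failed, but will try again: ...\n* Deleting \"nospam-472000\" in qemu2 ..."
        for _, l := range unexpectedLines(stderr, []string{"Deleting"}) {
            fmt.Println("unexpected stderr:", l)
        }
    }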

TestFunctional/serial/StartWithProxy (9.94s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-821000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-821000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.870545208s)

-- stdout --
	* [functional-821000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-821000" primary control-plane node in "functional-821000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-821000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50944 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50944 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50944 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-821000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-821000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-821000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=18517
- KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-821000" primary control-plane node in "functional-821000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-821000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:50944 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:50944 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:50944 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-821000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000: exit status 7 (66.055084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.94s)
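The proxy assertions never executed: the test wants `* Found network options:` in stdout and `You appear to be using a proxy` in stderr, but the start aborted during host creation, leaving only the `Local proxy ignored` warnings. A sketch of reproducing the invocation with the proxy variable set, using the binary path and flags from the log (illustrative, not the test's actual code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "functional-821000",
            "--memory=4000", "--apiserver-port=8441", "--wait=all", "--driver=qemu2")
        // The suite exports a local proxy; minikube warns rather than
        // passing localhost proxies into the Docker environment.
        cmd.Env = append(os.Environ(), "HTTP_PROXY=localhost:50944")
        out, err := cmd.CombinedOutput()
        if !strings.Contains(string(out), "Found network options:") {
            fmt.Printf("no proxy detection in output (err=%v); start failed earlier\n", err)
        }
    }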

TestFunctional/serial/SoftStart (5.27s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-821000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-821000 --alsologtostderr -v=8: exit status 80 (5.200332666s)

-- stdout --
	* [functional-821000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-821000" primary control-plane node in "functional-821000" cluster
	* Restarting existing qemu2 VM for "functional-821000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-821000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 10:55:21.431428    7026 out.go:291] Setting OutFile to fd 1 ...
	I0327 10:55:21.431545    7026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:55:21.431548    7026 out.go:304] Setting ErrFile to fd 2...
	I0327 10:55:21.431550    7026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:55:21.431681    7026 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 10:55:21.432616    7026 out.go:298] Setting JSON to false
	I0327 10:55:21.448906    7026 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5094,"bootTime":1711557027,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 10:55:21.448971    7026 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 10:55:21.454455    7026 out.go:177] * [functional-821000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 10:55:21.466213    7026 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 10:55:21.462410    7026 notify.go:220] Checking for updates...
	I0327 10:55:21.472361    7026 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 10:55:21.475388    7026 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 10:55:21.478382    7026 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 10:55:21.481395    7026 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 10:55:21.484390    7026 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 10:55:21.487600    7026 config.go:182] Loaded profile config "functional-821000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 10:55:21.487666    7026 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 10:55:21.492336    7026 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 10:55:21.499377    7026 start.go:297] selected driver: qemu2
	I0327 10:55:21.499384    7026 start.go:901] validating driver "qemu2" against &{Name:functional-821000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-821000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 10:55:21.499457    7026 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 10:55:21.501754    7026 cni.go:84] Creating CNI manager for ""
	I0327 10:55:21.501770    7026 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 10:55:21.501816    7026 start.go:340] cluster config:
	{Name:functional-821000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-821000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 10:55:21.506233    7026 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 10:55:21.514193    7026 out.go:177] * Starting "functional-821000" primary control-plane node in "functional-821000" cluster
	I0327 10:55:21.518378    7026 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 10:55:21.518394    7026 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 10:55:21.518407    7026 cache.go:56] Caching tarball of preloaded images
	I0327 10:55:21.518468    7026 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 10:55:21.518474    7026 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 10:55:21.518541    7026 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/functional-821000/config.json ...
	I0327 10:55:21.519007    7026 start.go:360] acquireMachinesLock for functional-821000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 10:55:21.519039    7026 start.go:364] duration metric: took 25.541µs to acquireMachinesLock for "functional-821000"
	I0327 10:55:21.519048    7026 start.go:96] Skipping create...Using existing machine configuration
	I0327 10:55:21.519053    7026 fix.go:54] fixHost starting: 
	I0327 10:55:21.519167    7026 fix.go:112] recreateIfNeeded on functional-821000: state=Stopped err=<nil>
	W0327 10:55:21.519176    7026 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 10:55:21.522402    7026 out.go:177] * Restarting existing qemu2 VM for "functional-821000" ...
	I0327 10:55:21.530397    7026 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:ff:5c:ab:e4:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/disk.qcow2
	I0327 10:55:21.532467    7026 main.go:141] libmachine: STDOUT: 
	I0327 10:55:21.532488    7026 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 10:55:21.532516    7026 fix.go:56] duration metric: took 13.462125ms for fixHost
	I0327 10:55:21.532520    7026 start.go:83] releasing machines lock for "functional-821000", held for 13.477042ms
	W0327 10:55:21.532528    7026 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 10:55:21.532558    7026 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 10:55:21.532563    7026 start.go:728] Will try again in 5 seconds ...
	I0327 10:55:26.534665    7026 start.go:360] acquireMachinesLock for functional-821000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 10:55:26.534986    7026 start.go:364] duration metric: took 234.042µs to acquireMachinesLock for "functional-821000"
	I0327 10:55:26.535109    7026 start.go:96] Skipping create...Using existing machine configuration
	I0327 10:55:26.535133    7026 fix.go:54] fixHost starting: 
	I0327 10:55:26.535830    7026 fix.go:112] recreateIfNeeded on functional-821000: state=Stopped err=<nil>
	W0327 10:55:26.535859    7026 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 10:55:26.541371    7026 out.go:177] * Restarting existing qemu2 VM for "functional-821000" ...
	I0327 10:55:26.549515    7026 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:ff:5c:ab:e4:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/disk.qcow2
	I0327 10:55:26.559811    7026 main.go:141] libmachine: STDOUT: 
	I0327 10:55:26.559879    7026 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 10:55:26.559952    7026 fix.go:56] duration metric: took 24.825542ms for fixHost
	I0327 10:55:26.559967    7026 start.go:83] releasing machines lock for "functional-821000", held for 24.960041ms
	W0327 10:55:26.560168    7026 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-821000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-821000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 10:55:26.567312    7026 out.go:177] 
	W0327 10:55:26.571220    7026 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 10:55:26.571288    7026 out.go:239] * 
	* 
	W0327 10:55:26.574005    7026 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 10:55:26.582281    7026 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-821000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.202125458s for "functional-821000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000: exit status 7 (69.561667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.27s)
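The SoftStart log shows the start path's retry shape exactly: one attempt, `Will try again in 5 seconds` (start.go:728), a second attempt, then the GUEST_PROVISION exit. That control flow reduced to a sketch (function names are illustrative, not minikube's):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startWithOneRetry makes one attempt, waits, then makes a second and
    // final attempt, mirroring the two "Restarting existing qemu2 VM"
    // attempts in the log above.
    func startWithOneRetry(start func() error, delay time.Duration) error {
        if err := start(); err == nil {
            return nil
        }
        time.Sleep(delay)
        return start()
    }

    func main() {
        attempt := 0
        err := startWithOneRetry(func() error {
            attempt++
            fmt.Println("attempt", attempt)
            return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
        }, 5*time.Second)
        fmt.Println("final error:", err)
    }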

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (31.375375ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-821000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000: exit status 7 (32.142666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
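With no cluster ever created, the kubeconfig at /Users/jenkins/minikube-integration/18517-6013/kubeconfig has no current-context, so this and the following kubectl-based tests fail on configuration rather than on cluster state. The check itself, sketched via os/exec:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "config", "current-context").Output()
        if err != nil {
            // With no context set, kubectl exits 1 and prints
            // "error: current-context is not set" on stderr.
            fmt.Println("no current context:", err)
            return
        }
        fmt.Println("current context:", strings.TrimSpace(string(out)))
    }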

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-821000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-821000 get po -A: exit status 1 (27.087417ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-821000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-821000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-821000\n"*: args "kubectl --context functional-821000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-821000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000: exit status 7 (32.518875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh sudo crictl images: exit status 83 (40.920958ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-821000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (39.172584ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-821000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (38.785417ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (43.025292ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-821000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.69s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 kubectl -- --context functional-821000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 kubectl -- --context functional-821000 get pods: exit status 1 (659.821542ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-821000
	* no server found for cluster "functional-821000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-821000 kubectl -- --context functional-821000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000: exit status 7 (34.269792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.69s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-821000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-821000 get pods: exit status 1 (898.610291ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-821000
	* no server found for cluster "functional-821000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-821000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000: exit status 7 (31.809208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)
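Note: both kubectl failures above are downstream symptoms: because the cluster never came up, minikube never wrote a functional-821000 context into the kubeconfig, hence "context was not found" and "no server found". A quick way to confirm that outside the test harness, using only standard kubectl subcommands (the KUBECONFIG path is copied from this report):

    KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig kubectl config get-contexts
    KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig kubectl config get-clusters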

TestFunctional/serial/ExtraConfig (5.27s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-821000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-821000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.199458458s)

-- stdout --
	* [functional-821000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-821000" primary control-plane node in "functional-821000" cluster
	* Restarting existing qemu2 VM for "functional-821000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-821000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-821000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-821000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.199881917s for "functional-821000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000: exit status 7 (68.400209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.27s)
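Note: this restart attempt exposes the likely root cause of the whole run. The qemu2 driver launches the VM through socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, i.e. the socket_vmnet daemon is not listening on the build agent. A minimal sketch for checking the daemon on the agent, assuming a launchd-managed socket_vmnet install (both paths are copied from the driver invocation recorded later in this report):

    ls -l /var/run/socket_vmnet                       # the listening socket the driver connects to
    ls -l /opt/socket_vmnet/bin/socket_vmnet_client   # the client binary the driver execs
    sudo launchctl list | grep -i socket_vmnet        # the daemon should be loaded and running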

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-821000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-821000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.807625ms)

** stderr ** 
	error: context "functional-821000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-821000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000: exit status 7 (32.094791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
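Note: ComponentHealth never got past the missing context either. With a running cluster, a slice of the same data the test parses can be eyeballed with a jsonpath query over the control-plane pods (standard kubectl; the context name and label selector are copied from the failed command above):

    kubectl --context functional-821000 -n kube-system get po -l tier=control-plane \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'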

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 logs: exit status 83 (78.471708ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-853000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:51 PDT |                     |
	|         | -p download-only-853000                                                  |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:52 PDT | 27 Mar 24 10:52 PDT |
	| delete  | -p download-only-853000                                                  | download-only-853000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:52 PDT | 27 Mar 24 10:52 PDT |
	| start   | -o=json --download-only                                                  | download-only-089000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:52 PDT |                     |
	|         | -p download-only-089000                                                  |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                                             |                      |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
	| delete  | -p download-only-089000                                                  | download-only-089000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
	| start   | -o=json --download-only                                                  | download-only-275000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT |                     |
	|         | -p download-only-275000                                                  |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                                      |                      |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
	| delete  | -p download-only-275000                                                  | download-only-275000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
	| delete  | -p download-only-853000                                                  | download-only-853000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
	| delete  | -p download-only-089000                                                  | download-only-089000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
	| delete  | -p download-only-275000                                                  | download-only-275000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
	| start   | --download-only -p                                                       | binary-mirror-443000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT |                     |
	|         | binary-mirror-443000                                                     |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |                |                     |                     |
	|         | --binary-mirror                                                          |                      |         |                |                     |                     |
	|         | http://127.0.0.1:50912                                                   |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-443000                                                  | binary-mirror-443000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
	| addons  | disable dashboard -p                                                     | addons-723000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT |                     |
	|         | addons-723000                                                            |                      |         |                |                     |                     |
	| addons  | enable dashboard -p                                                      | addons-723000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT |                     |
	|         | addons-723000                                                            |                      |         |                |                     |                     |
	| start   | -p addons-723000 --wait=true                                             | addons-723000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |                |                     |                     |
	|         | --addons=registry                                                        |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |                |                     |                     |
	|         | --addons=yakd --driver=qemu2                                             |                      |         |                |                     |                     |
	|         |  --addons=ingress                                                        |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |                |                     |                     |
	| delete  | -p addons-723000                                                         | addons-723000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
	| start   | -p nospam-472000 -n=1 --memory=2250 --wait=false                         | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| start   | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| start   | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| start   | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| pause   | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| pause   | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| pause   | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| unpause | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| unpause | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| unpause | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| stop    | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| stop    | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| stop    | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| delete  | -p nospam-472000                                                         | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
	| start   | -p functional-821000                                                     | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
	|         | --memory=4000                                                            |                      |         |                |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |                |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |                |                     |                     |
	| start   | -p functional-821000                                                     | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |                |                     |                     |
	| cache   | functional-821000 cache add                                              | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
	| cache   | functional-821000 cache add                                              | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
	| cache   | functional-821000 cache add                                              | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | functional-821000 cache add                                              | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
	|         | minikube-local-cache-test:functional-821000                              |                      |         |                |                     |                     |
	| cache   | functional-821000 cache delete                                           | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
	|         | minikube-local-cache-test:functional-821000                              |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
	| ssh     | functional-821000 ssh sudo                                               | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
	|         | crictl images                                                            |                      |         |                |                     |                     |
	| ssh     | functional-821000                                                        | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| ssh     | functional-821000 ssh                                                    | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | functional-821000 cache reload                                           | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
	| ssh     | functional-821000 ssh                                                    | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| kubectl | functional-821000 kubectl --                                             | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
	|         | --context functional-821000                                              |                      |         |                |                     |                     |
	|         | get pods                                                                 |                      |         |                |                     |                     |
	| start   | -p functional-821000                                                     | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |                |                     |                     |
	|         | --wait=all                                                               |                      |         |                |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 10:55:36
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 10:55:36.315644    7113 out.go:291] Setting OutFile to fd 1 ...
	I0327 10:55:36.315787    7113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:55:36.315789    7113 out.go:304] Setting ErrFile to fd 2...
	I0327 10:55:36.315791    7113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:55:36.315897    7113 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 10:55:36.316877    7113 out.go:298] Setting JSON to false
	I0327 10:55:36.333234    7113 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5109,"bootTime":1711557027,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 10:55:36.333290    7113 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 10:55:36.339377    7113 out.go:177] * [functional-821000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 10:55:36.348304    7113 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 10:55:36.352120    7113 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 10:55:36.348330    7113 notify.go:220] Checking for updates...
	I0327 10:55:36.359241    7113 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 10:55:36.360768    7113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 10:55:36.364256    7113 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 10:55:36.367302    7113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 10:55:36.370582    7113 config.go:182] Loaded profile config "functional-821000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 10:55:36.370633    7113 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 10:55:36.375252    7113 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 10:55:36.384235    7113 start.go:297] selected driver: qemu2
	I0327 10:55:36.384238    7113 start.go:901] validating driver "qemu2" against &{Name:functional-821000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-821000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 10:55:36.384290    7113 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 10:55:36.386618    7113 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 10:55:36.386659    7113 cni.go:84] Creating CNI manager for ""
	I0327 10:55:36.386667    7113 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 10:55:36.386708    7113 start.go:340] cluster config:
	{Name:functional-821000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-821000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 10:55:36.391287    7113 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 10:55:36.401264    7113 out.go:177] * Starting "functional-821000" primary control-plane node in "functional-821000" cluster
	I0327 10:55:36.407293    7113 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 10:55:36.407309    7113 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 10:55:36.407317    7113 cache.go:56] Caching tarball of preloaded images
	I0327 10:55:36.407378    7113 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 10:55:36.407387    7113 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 10:55:36.407460    7113 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/functional-821000/config.json ...
	I0327 10:55:36.408114    7113 start.go:360] acquireMachinesLock for functional-821000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 10:55:36.408149    7113 start.go:364] duration metric: took 30.125µs to acquireMachinesLock for "functional-821000"
	I0327 10:55:36.408158    7113 start.go:96] Skipping create...Using existing machine configuration
	I0327 10:55:36.408163    7113 fix.go:54] fixHost starting: 
	I0327 10:55:36.408294    7113 fix.go:112] recreateIfNeeded on functional-821000: state=Stopped err=<nil>
	W0327 10:55:36.408301    7113 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 10:55:36.419189    7113 out.go:177] * Restarting existing qemu2 VM for "functional-821000" ...
	I0327 10:55:36.422305    7113 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:ff:5c:ab:e4:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/disk.qcow2
	I0327 10:55:36.424538    7113 main.go:141] libmachine: STDOUT: 
	I0327 10:55:36.424561    7113 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 10:55:36.424595    7113 fix.go:56] duration metric: took 16.431417ms for fixHost
	I0327 10:55:36.424599    7113 start.go:83] releasing machines lock for "functional-821000", held for 16.446167ms
	W0327 10:55:36.424605    7113 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 10:55:36.424634    7113 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 10:55:36.424639    7113 start.go:728] Will try again in 5 seconds ...
	I0327 10:55:41.426872    7113 start.go:360] acquireMachinesLock for functional-821000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 10:55:41.427280    7113 start.go:364] duration metric: took 317.625µs to acquireMachinesLock for "functional-821000"
	I0327 10:55:41.427460    7113 start.go:96] Skipping create...Using existing machine configuration
	I0327 10:55:41.427473    7113 fix.go:54] fixHost starting: 
	I0327 10:55:41.428237    7113 fix.go:112] recreateIfNeeded on functional-821000: state=Stopped err=<nil>
	W0327 10:55:41.428258    7113 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 10:55:41.436719    7113 out.go:177] * Restarting existing qemu2 VM for "functional-821000" ...
	I0327 10:55:41.440995    7113 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:ff:5c:ab:e4:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/disk.qcow2
	I0327 10:55:41.451272    7113 main.go:141] libmachine: STDOUT: 
	I0327 10:55:41.451332    7113 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 10:55:41.451440    7113 fix.go:56] duration metric: took 23.967708ms for fixHost
	I0327 10:55:41.451464    7113 start.go:83] releasing machines lock for "functional-821000", held for 24.116333ms
	W0327 10:55:41.451637    7113 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-821000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 10:55:41.456714    7113 out.go:177] 
	W0327 10:55:41.460858    7113 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 10:55:41.460877    7113 out.go:239] * 
	W0327 10:55:41.463311    7113 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 10:55:41.470591    7113 out.go:177] 
	
	
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-821000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-853000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:51 PDT |                     |
|         | -p download-only-853000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:52 PDT | 27 Mar 24 10:52 PDT |
| delete  | -p download-only-853000                                                  | download-only-853000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:52 PDT | 27 Mar 24 10:52 PDT |
| start   | -o=json --download-only                                                  | download-only-089000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:52 PDT |                     |
|         | -p download-only-089000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.29.3                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
| delete  | -p download-only-089000                                                  | download-only-089000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
| start   | -o=json --download-only                                                  | download-only-275000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT |                     |
|         | -p download-only-275000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.30.0-beta.0                                      |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
| delete  | -p download-only-275000                                                  | download-only-275000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
| delete  | -p download-only-853000                                                  | download-only-853000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
| delete  | -p download-only-089000                                                  | download-only-089000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
| delete  | -p download-only-275000                                                  | download-only-275000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
| start   | --download-only -p                                                       | binary-mirror-443000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT |                     |
|         | binary-mirror-443000                                                     |                      |         |                |                     |                     |
|         | --alsologtostderr                                                        |                      |         |                |                     |                     |
|         | --binary-mirror                                                          |                      |         |                |                     |                     |
|         | http://127.0.0.1:50912                                                   |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | -p binary-mirror-443000                                                  | binary-mirror-443000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
| addons  | disable dashboard -p                                                     | addons-723000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT |                     |
|         | addons-723000                                                            |                      |         |                |                     |                     |
| addons  | enable dashboard -p                                                      | addons-723000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT |                     |
|         | addons-723000                                                            |                      |         |                |                     |                     |
| start   | -p addons-723000 --wait=true                                             | addons-723000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |                |                     |                     |
|         | --addons=registry                                                        |                      |         |                |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |                |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |                |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |                |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |                |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |                |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |                |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |                |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |                |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |                |                     |                     |
|         |  --addons=ingress                                                        |                      |         |                |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |                |                     |                     |
| delete  | -p addons-723000                                                         | addons-723000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
| start   | -p nospam-472000 -n=1 --memory=2250 --wait=false                         | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| start   | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| pause   | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| unpause | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| stop    | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| delete  | -p nospam-472000                                                         | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
| start   | -p functional-821000                                                     | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | --memory=4000                                                            |                      |         |                |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |                |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |                |                     |                     |
| start   | -p functional-821000                                                     | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |                |                     |                     |
| cache   | functional-821000 cache add                                              | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | functional-821000 cache add                                              | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | functional-821000 cache add                                              | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-821000 cache add                                              | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
|         | minikube-local-cache-test:functional-821000                              |                      |         |                |                     |                     |
| cache   | functional-821000 cache delete                                           | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
|         | minikube-local-cache-test:functional-821000                              |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
| ssh     | functional-821000 ssh sudo                                               | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | crictl images                                                            |                      |         |                |                     |                     |
| ssh     | functional-821000                                                        | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| ssh     | functional-821000 ssh                                                    | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-821000 cache reload                                           | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
| ssh     | functional-821000 ssh                                                    | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| kubectl | functional-821000 kubectl --                                             | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | --context functional-821000                                              |                      |         |                |                     |                     |
|         | get pods                                                                 |                      |         |                |                     |                     |
| start   | -p functional-821000                                                     | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |                |                     |                     |
|         | --wait=all                                                               |                      |         |                |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/03/27 10:55:36
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0327 10:55:36.315644    7113 out.go:291] Setting OutFile to fd 1 ...
I0327 10:55:36.315787    7113 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 10:55:36.315789    7113 out.go:304] Setting ErrFile to fd 2...
I0327 10:55:36.315791    7113 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 10:55:36.315897    7113 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
I0327 10:55:36.316877    7113 out.go:298] Setting JSON to false
I0327 10:55:36.333234    7113 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5109,"bootTime":1711557027,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0327 10:55:36.333290    7113 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0327 10:55:36.339377    7113 out.go:177] * [functional-821000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
I0327 10:55:36.348304    7113 out.go:177]   - MINIKUBE_LOCATION=18517
I0327 10:55:36.352120    7113 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
I0327 10:55:36.348330    7113 notify.go:220] Checking for updates...
I0327 10:55:36.359241    7113 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0327 10:55:36.360768    7113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0327 10:55:36.364256    7113 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
I0327 10:55:36.367302    7113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0327 10:55:36.370582    7113 config.go:182] Loaded profile config "functional-821000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 10:55:36.370633    7113 driver.go:392] Setting default libvirt URI to qemu:///system
I0327 10:55:36.375252    7113 out.go:177] * Using the qemu2 driver based on existing profile
I0327 10:55:36.384235    7113 start.go:297] selected driver: qemu2
I0327 10:55:36.384238    7113 start.go:901] validating driver "qemu2" against &{Name:functional-821000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-821000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0327 10:55:36.384290    7113 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0327 10:55:36.386618    7113 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0327 10:55:36.386659    7113 cni.go:84] Creating CNI manager for ""
I0327 10:55:36.386667    7113 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0327 10:55:36.386708    7113 start.go:340] cluster config:
{Name:functional-821000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-821000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0327 10:55:36.391287    7113 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0327 10:55:36.401264    7113 out.go:177] * Starting "functional-821000" primary control-plane node in "functional-821000" cluster
I0327 10:55:36.407293    7113 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
I0327 10:55:36.407309    7113 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
I0327 10:55:36.407317    7113 cache.go:56] Caching tarball of preloaded images
I0327 10:55:36.407378    7113 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0327 10:55:36.407387    7113 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
I0327 10:55:36.407460    7113 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/functional-821000/config.json ...
I0327 10:55:36.408114    7113 start.go:360] acquireMachinesLock for functional-821000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0327 10:55:36.408149    7113 start.go:364] duration metric: took 30.125µs to acquireMachinesLock for "functional-821000"
I0327 10:55:36.408158    7113 start.go:96] Skipping create...Using existing machine configuration
I0327 10:55:36.408163    7113 fix.go:54] fixHost starting: 
I0327 10:55:36.408294    7113 fix.go:112] recreateIfNeeded on functional-821000: state=Stopped err=<nil>
W0327 10:55:36.408301    7113 fix.go:138] unexpected machine state, will restart: <nil>
I0327 10:55:36.419189    7113 out.go:177] * Restarting existing qemu2 VM for "functional-821000" ...
I0327 10:55:36.422305    7113 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:ff:5c:ab:e4:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/disk.qcow2
I0327 10:55:36.424538    7113 main.go:141] libmachine: STDOUT: 
I0327 10:55:36.424561    7113 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0327 10:55:36.424595    7113 fix.go:56] duration metric: took 16.431417ms for fixHost
I0327 10:55:36.424599    7113 start.go:83] releasing machines lock for "functional-821000", held for 16.446167ms
W0327 10:55:36.424605    7113 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0327 10:55:36.424634    7113 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0327 10:55:36.424639    7113 start.go:728] Will try again in 5 seconds ...
I0327 10:55:41.426872    7113 start.go:360] acquireMachinesLock for functional-821000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0327 10:55:41.427280    7113 start.go:364] duration metric: took 317.625µs to acquireMachinesLock for "functional-821000"
I0327 10:55:41.427460    7113 start.go:96] Skipping create...Using existing machine configuration
I0327 10:55:41.427473    7113 fix.go:54] fixHost starting: 
I0327 10:55:41.428237    7113 fix.go:112] recreateIfNeeded on functional-821000: state=Stopped err=<nil>
W0327 10:55:41.428258    7113 fix.go:138] unexpected machine state, will restart: <nil>
I0327 10:55:41.436719    7113 out.go:177] * Restarting existing qemu2 VM for "functional-821000" ...
I0327 10:55:41.440995    7113 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:ff:5c:ab:e4:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/disk.qcow2
I0327 10:55:41.451272    7113 main.go:141] libmachine: STDOUT: 
I0327 10:55:41.451332    7113 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0327 10:55:41.451440    7113 fix.go:56] duration metric: took 23.967708ms for fixHost
I0327 10:55:41.451464    7113 start.go:83] releasing machines lock for "functional-821000", held for 24.116333ms
W0327 10:55:41.451637    7113 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-821000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0327 10:55:41.456714    7113 out.go:177] 
W0327 10:55:41.460858    7113 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0327 10:55:41.460877    7113 out.go:239] * 
W0327 10:55:41.463311    7113 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0327 10:55:41.470591    7113 out.go:177] 

* The control-plane node functional-821000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-821000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
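Both log tests fail for the same underlying reason recorded above: socket_vmnet refuses connections, the qemu2 VM never restarts, and "minikube logs" therefore contains none of the words the functional tests look for. As a rough illustration of the word check that produces the "expected minikube logs to include word" message (a minimal Go sketch under the assumption of a one-word expected list; the real helper in functional_test.go differs):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Capture `minikube logs` for the profile under test, combining
	// stdout and stderr the way the functional tests inspect output.
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-821000", "logs").CombinedOutput()
	if err != nil {
		fmt.Printf("minikube logs returned an error: %v\n", err)
	}

	// Hypothetical expected-word list for illustration; the report shows
	// the real test asserting at least the word "Linux".
	for _, word := range []string{"Linux"} {
		if !strings.Contains(string(out), word) {
			fmt.Printf("expected minikube logs to include word: -%q- but got \n***\n%s\n***\n", word, out)
		}
	}
}

With the host stopped, the captured output holds only the audit table and the failed start trace, so the check trips almost immediately (0.08s here).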

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd919427089/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-853000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:51 PDT |                     |
|         | -p download-only-853000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:52 PDT | 27 Mar 24 10:52 PDT |
| delete  | -p download-only-853000                                                  | download-only-853000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:52 PDT | 27 Mar 24 10:52 PDT |
| start   | -o=json --download-only                                                  | download-only-089000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:52 PDT |                     |
|         | -p download-only-089000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.29.3                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
| delete  | -p download-only-089000                                                  | download-only-089000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
| start   | -o=json --download-only                                                  | download-only-275000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT |                     |
|         | -p download-only-275000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.30.0-beta.0                                      |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
| delete  | -p download-only-275000                                                  | download-only-275000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
| delete  | -p download-only-853000                                                  | download-only-853000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
| delete  | -p download-only-089000                                                  | download-only-089000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
| delete  | -p download-only-275000                                                  | download-only-275000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
| start   | --download-only -p                                                       | binary-mirror-443000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT |                     |
|         | binary-mirror-443000                                                     |                      |         |                |                     |                     |
|         | --alsologtostderr                                                        |                      |         |                |                     |                     |
|         | --binary-mirror                                                          |                      |         |                |                     |                     |
|         | http://127.0.0.1:50912                                                   |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | -p binary-mirror-443000                                                  | binary-mirror-443000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
| addons  | disable dashboard -p                                                     | addons-723000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT |                     |
|         | addons-723000                                                            |                      |         |                |                     |                     |
| addons  | enable dashboard -p                                                      | addons-723000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT |                     |
|         | addons-723000                                                            |                      |         |                |                     |                     |
| start   | -p addons-723000 --wait=true                                             | addons-723000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |                |                     |                     |
|         | --addons=registry                                                        |                      |         |                |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |                |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |                |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |                |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |                |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |                |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |                |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |                |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |                |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |                |                     |                     |
|         |  --addons=ingress                                                        |                      |         |                |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |                |                     |                     |
| delete  | -p addons-723000                                                         | addons-723000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
| start   | -p nospam-472000 -n=1 --memory=2250 --wait=false                         | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| start   | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| pause   | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| unpause | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| stop    | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-472000 --log_dir                                                  | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| delete  | -p nospam-472000                                                         | nospam-472000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
| start   | -p functional-821000                                                     | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | --memory=4000                                                            |                      |         |                |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |                |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |                |                     |                     |
| start   | -p functional-821000                                                     | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |                |                     |                     |
| cache   | functional-821000 cache add                                              | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | functional-821000 cache add                                              | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | functional-821000 cache add                                              | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-821000 cache add                                              | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
|         | minikube-local-cache-test:functional-821000                              |                      |         |                |                     |                     |
| cache   | functional-821000 cache delete                                           | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
|         | minikube-local-cache-test:functional-821000                              |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
| ssh     | functional-821000 ssh sudo                                               | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | crictl images                                                            |                      |         |                |                     |                     |
| ssh     | functional-821000                                                        | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| ssh     | functional-821000 ssh                                                    | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-821000 cache reload                                           | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
| ssh     | functional-821000 ssh                                                    | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT | 27 Mar 24 10:55 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| kubectl | functional-821000 kubectl --                                             | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | --context functional-821000                                              |                      |         |                |                     |                     |
|         | get pods                                                                 |                      |         |                |                     |                     |
| start   | -p functional-821000                                                     | functional-821000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:55 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |                |                     |                     |
|         | --wait=all                                                               |                      |         |                |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/03/27 10:55:36
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0327 10:55:36.315644    7113 out.go:291] Setting OutFile to fd 1 ...
I0327 10:55:36.315787    7113 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 10:55:36.315789    7113 out.go:304] Setting ErrFile to fd 2...
I0327 10:55:36.315791    7113 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 10:55:36.315897    7113 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
I0327 10:55:36.316877    7113 out.go:298] Setting JSON to false
I0327 10:55:36.333234    7113 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5109,"bootTime":1711557027,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0327 10:55:36.333290    7113 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0327 10:55:36.339377    7113 out.go:177] * [functional-821000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
I0327 10:55:36.348304    7113 out.go:177]   - MINIKUBE_LOCATION=18517
I0327 10:55:36.352120    7113 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
I0327 10:55:36.348330    7113 notify.go:220] Checking for updates...
I0327 10:55:36.359241    7113 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0327 10:55:36.360768    7113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0327 10:55:36.364256    7113 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
I0327 10:55:36.367302    7113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0327 10:55:36.370582    7113 config.go:182] Loaded profile config "functional-821000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 10:55:36.370633    7113 driver.go:392] Setting default libvirt URI to qemu:///system
I0327 10:55:36.375252    7113 out.go:177] * Using the qemu2 driver based on existing profile
I0327 10:55:36.384235    7113 start.go:297] selected driver: qemu2
I0327 10:55:36.384238    7113 start.go:901] validating driver "qemu2" against &{Name:functional-821000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-821000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0327 10:55:36.384290    7113 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0327 10:55:36.386618    7113 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0327 10:55:36.386659    7113 cni.go:84] Creating CNI manager for ""
I0327 10:55:36.386667    7113 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0327 10:55:36.386708    7113 start.go:340] cluster config:
{Name:functional-821000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-821000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0327 10:55:36.391287    7113 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0327 10:55:36.401264    7113 out.go:177] * Starting "functional-821000" primary control-plane node in "functional-821000" cluster
I0327 10:55:36.407293    7113 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
I0327 10:55:36.407309    7113 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
I0327 10:55:36.407317    7113 cache.go:56] Caching tarball of preloaded images
I0327 10:55:36.407378    7113 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0327 10:55:36.407387    7113 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
I0327 10:55:36.407460    7113 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/functional-821000/config.json ...
I0327 10:55:36.408114    7113 start.go:360] acquireMachinesLock for functional-821000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0327 10:55:36.408149    7113 start.go:364] duration metric: took 30.125µs to acquireMachinesLock for "functional-821000"
I0327 10:55:36.408158    7113 start.go:96] Skipping create...Using existing machine configuration
I0327 10:55:36.408163    7113 fix.go:54] fixHost starting: 
I0327 10:55:36.408294    7113 fix.go:112] recreateIfNeeded on functional-821000: state=Stopped err=<nil>
W0327 10:55:36.408301    7113 fix.go:138] unexpected machine state, will restart: <nil>
I0327 10:55:36.419189    7113 out.go:177] * Restarting existing qemu2 VM for "functional-821000" ...
I0327 10:55:36.422305    7113 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:ff:5c:ab:e4:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/disk.qcow2
I0327 10:55:36.424538    7113 main.go:141] libmachine: STDOUT: 
I0327 10:55:36.424561    7113 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0327 10:55:36.424595    7113 fix.go:56] duration metric: took 16.431417ms for fixHost
I0327 10:55:36.424599    7113 start.go:83] releasing machines lock for "functional-821000", held for 16.446167ms
W0327 10:55:36.424605    7113 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0327 10:55:36.424634    7113 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0327 10:55:36.424639    7113 start.go:728] Will try again in 5 seconds ...
I0327 10:55:41.426872    7113 start.go:360] acquireMachinesLock for functional-821000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0327 10:55:41.427280    7113 start.go:364] duration metric: took 317.625µs to acquireMachinesLock for "functional-821000"
I0327 10:55:41.427460    7113 start.go:96] Skipping create...Using existing machine configuration
I0327 10:55:41.427473    7113 fix.go:54] fixHost starting: 
I0327 10:55:41.428237    7113 fix.go:112] recreateIfNeeded on functional-821000: state=Stopped err=<nil>
W0327 10:55:41.428258    7113 fix.go:138] unexpected machine state, will restart: <nil>
I0327 10:55:41.436719    7113 out.go:177] * Restarting existing qemu2 VM for "functional-821000" ...
I0327 10:55:41.440995    7113 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:ff:5c:ab:e4:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/functional-821000/disk.qcow2
I0327 10:55:41.451272    7113 main.go:141] libmachine: STDOUT: 
I0327 10:55:41.451332    7113 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0327 10:55:41.451440    7113 fix.go:56] duration metric: took 23.967708ms for fixHost
I0327 10:55:41.451464    7113 start.go:83] releasing machines lock for "functional-821000", held for 24.116333ms
W0327 10:55:41.451637    7113 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-821000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0327 10:55:41.456714    7113 out.go:177] 
W0327 10:55:41.460858    7113 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0327 10:55:41.460877    7113 out.go:239] * 
W0327 10:55:41.463311    7113 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0327 10:55:41.470591    7113 out.go:177] 

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
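
Every start attempt in this run dies at the same point: the qemu2 driver cannot reach the host-side networking socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the VM never boots and every functional test below fails against a stopped host. A minimal triage sketch for the macOS runner, assuming socket_vmnet was installed via Homebrew (the service name and paths here are assumptions, not taken from this log):

    # is the socket_vmnet daemon loaded, and does its socket exist?
    sudo launchctl list | grep -i socket_vmnet
    ls -l /var/run/socket_vmnet
    # if not, restart the Homebrew service as root so it can own the socket
    sudo brew services restart socket_vmnet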

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-821000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-821000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.348125ms)

** stderr ** 
	error: context "functional-821000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-821000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
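
The kubectl failure here is a downstream symptom: because the VM never started, no "functional-821000" context was ever written to the kubeconfig. Two standard commands to confirm, sketched here rather than taken from the log:

    kubectl config get-contexts                                     # functional-821000 should be listed
    out/minikube-darwin-arm64 -p functional-821000 update-context   # rewrites the context once the VM is up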

TestFunctional/parallel/DashboardCmd (0.21s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-821000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-821000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-821000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-821000 --alsologtostderr -v=1] stderr:
I0327 10:56:37.787904    7502 out.go:291] Setting OutFile to fd 1 ...
I0327 10:56:37.788300    7502 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 10:56:37.788304    7502 out.go:304] Setting ErrFile to fd 2...
I0327 10:56:37.788307    7502 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 10:56:37.788469    7502 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
I0327 10:56:37.788685    7502 mustload.go:65] Loading cluster: functional-821000
I0327 10:56:37.788866    7502 config.go:182] Loaded profile config "functional-821000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 10:56:37.793358    7502 out.go:177] * The control-plane node functional-821000 host is not running: state=Stopped
I0327 10:56:37.797485    7502 out.go:177]   To start a cluster, run: "minikube start -p functional-821000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000: exit status 7 (45.1915ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.21s)

TestFunctional/parallel/StatusCmd (0.13s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 status: exit status 7 (31.891708ms)

-- stdout --
	functional-821000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-821000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (32.369916ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-821000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 status -o json: exit status 7 (32.184292ms)

-- stdout --
	{"Name":"functional-821000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-821000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000: exit status 7 (31.670875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.13s)
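
The three invocations above cover the status forms useful in scripts: the default table, a Go template over the status struct (--format / -f), and JSON (-o json). Note that status intentionally exits non-zero while the host is down (exit status 7 throughout this log, flagged "may be ok" by the harness), so callers should parse the output rather than trust the exit code alone. Both commands below appear verbatim in the log:

    out/minikube-darwin-arm64 -p functional-821000 status --format={{.Host}}   # single field via Go template
    out/minikube-darwin-arm64 -p functional-821000 status -o json              # machine-readable; still exits 7 when stopped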

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-821000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-821000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.798792ms)

** stderr ** 
	error: context "functional-821000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-821000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-821000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-821000 describe po hello-node-connect: exit status 1 (26.658334ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-821000

** /stderr **
functional_test.go:1600: "kubectl --context functional-821000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-821000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-821000 logs -l app=hello-node-connect: exit status 1 (26.733792ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-821000

** /stderr **
functional_test.go:1606: "kubectl --context functional-821000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-821000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-821000 describe svc hello-node-connect: exit status 1 (26.629042ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-821000

** /stderr **
functional_test.go:1612: "kubectl --context functional-821000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000: exit status 7 (32.330042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-821000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000: exit status 7 (32.514917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "echo hello": exit status 83 (43.79825ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-821000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-821000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-821000\"\n"*. args "out/minikube-darwin-arm64 -p functional-821000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "cat /etc/hostname": exit status 83 (43.956292ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-821000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-821000"- but got *"* The control-plane node functional-821000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-821000\"\n"*. args "out/minikube-darwin-arm64 -p functional-821000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000: exit status 7 (31.969417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)
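
For reference, the behavior this test asserts on a healthy cluster (commands verbatim from the log; expected outputs come from the assertions above):

    out/minikube-darwin-arm64 -p functional-821000 ssh "echo hello"          # expected: hello
    out/minikube-darwin-arm64 -p functional-821000 ssh "cat /etc/hostname"   # expected: functional-821000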

TestFunctional/parallel/CpCmd (0.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (56.339583ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-821000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh -n functional-821000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh -n functional-821000 "sudo cat /home/docker/cp-test.txt": exit status 83 (43.962917ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-821000 ssh -n functional-821000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-821000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-821000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 cp functional-821000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2738404438/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 cp functional-821000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2738404438/001/cp-test.txt: exit status 83 (44.426334ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-821000 cp functional-821000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2738404438/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh -n functional-821000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh -n functional-821000 "sudo cat /home/docker/cp-test.txt": exit status 83 (41.917625ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-821000 ssh -n functional-821000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2738404438/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-821000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-821000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (50.6935ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-821000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh -n functional-821000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh -n functional-821000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (43.786917ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-821000 ssh -n functional-821000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-821000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-821000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.28s)
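
On a working cluster the three cp cases above round-trip a file host -> VM, VM -> host, and host -> a new directory in the VM, each verified with ssh + sudo cat. The intended happy path, reconstructed from the commands and the -want side of the diffs in this log:

    out/minikube-darwin-arm64 -p functional-821000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-darwin-arm64 -p functional-821000 ssh -n functional-821000 "sudo cat /home/docker/cp-test.txt"
    # expected: Test file for checking file cp process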

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/6476/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "sudo cat /etc/test/nested/copy/6476/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "sudo cat /etc/test/nested/copy/6476/hosts": exit status 83 (42.655709ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-821000 ssh "sudo cat /etc/test/nested/copy/6476/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-821000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-821000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-821000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-821000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000: exit status 7 (32.09975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)
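
FileSync exercises minikube's file-sync mechanism: any file placed under $MINIKUBE_HOME/files/<path> on the host is copied to /<path> inside the VM during start, which is why the test looks for /etc/test/nested/copy/6476/hosts (6476 appears to be the test binary's PID). A sketch of the mechanism, assuming a cluster that actually starts:

    # stage a file on the host; it lands at the matching absolute path in the VM
    mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/6476"
    cp /etc/hosts "$MINIKUBE_HOME/files/etc/test/nested/copy/6476/hosts"
    minikube start -p functional-821000      # sync happens during start
    minikube -p functional-821000 ssh "sudo cat /etc/test/nested/copy/6476/hosts"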

TestFunctional/parallel/CertSync (0.3s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/6476.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "sudo cat /etc/ssl/certs/6476.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "sudo cat /etc/ssl/certs/6476.pem": exit status 83 (43.970291ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/6476.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-821000 ssh \"sudo cat /etc/ssl/certs/6476.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/6476.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-821000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-821000"
	"""
)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/6476.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "sudo cat /usr/share/ca-certificates/6476.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "sudo cat /usr/share/ca-certificates/6476.pem": exit status 83 (49.634667ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/6476.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-821000 ssh \"sudo cat /usr/share/ca-certificates/6476.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/6476.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-821000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-821000"
	"""
)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (45.73725ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-821000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-821000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-821000"
	"""
)
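
The /etc/ssl/certs/51391683.0 path checked above is not arbitrary: it is the OpenSSL subject-hash name for minikube_test.pem, which is how CertSync decides where a synced CA certificate must appear inside the VM. The hash can be reproduced with a standard openssl invocation (the .pem filename is the one named by the test; its on-disk location is an assumption):

    openssl x509 -in minikube_test.pem -noout -subject_hash   # prints 51391683
    # minikube installs the cert as /etc/ssl/certs/<subject_hash>.0 in the VM

The same scheme accounts for the /etc/ssl/certs/3ec20f2e.0 check against minikube_test2.pem below.
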
functional_test.go:1995: Checking for existence of /etc/ssl/certs/64762.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "sudo cat /etc/ssl/certs/64762.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "sudo cat /etc/ssl/certs/64762.pem": exit status 83 (40.996416ms)
                                                
-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/64762.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-821000 ssh \"sudo cat /etc/ssl/certs/64762.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/64762.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-821000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-821000"
	"""
)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/64762.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "sudo cat /usr/share/ca-certificates/64762.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "sudo cat /usr/share/ca-certificates/64762.pem": exit status 83 (39.46425ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/64762.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-821000 ssh \"sudo cat /usr/share/ca-certificates/64762.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/64762.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-821000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-821000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (43.7915ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-821000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-821000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-821000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000: exit status 7 (34.714584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.30s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-821000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-821000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.340375ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-821000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-821000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-821000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-821000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-821000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-821000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-821000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-821000 -n functional-821000: exit status 7 (31.73925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "sudo systemctl is-active crio": exit status 83 (40.691084ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-821000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-821000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 version -o=json --components: exit status 83 (42.03375ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-821000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-821000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-821000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-821000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-821000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-821000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-821000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-821000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-821000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-821000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-821000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-821000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-821000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-821000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-821000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-821000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-821000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-821000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-821000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-821000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-821000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-821000 image ls --format short --alsologtostderr:
I0327 10:56:38.204749    7517 out.go:291] Setting OutFile to fd 1 ...
I0327 10:56:38.204940    7517 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 10:56:38.204943    7517 out.go:304] Setting ErrFile to fd 2...
I0327 10:56:38.204946    7517 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 10:56:38.205067    7517 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
I0327 10:56:38.205500    7517 config.go:182] Loaded profile config "functional-821000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 10:56:38.205557    7517 config.go:182] Loaded profile config "functional-821000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-821000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-821000 image ls --format table --alsologtostderr:
I0327 10:56:38.318724    7523 out.go:291] Setting OutFile to fd 1 ...
I0327 10:56:38.318899    7523 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 10:56:38.318902    7523 out.go:304] Setting ErrFile to fd 2...
I0327 10:56:38.318904    7523 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 10:56:38.319045    7523 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
I0327 10:56:38.319515    7523 config.go:182] Loaded profile config "functional-821000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 10:56:38.319577    7523 config.go:182] Loaded profile config "functional-821000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-821000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-821000 image ls --format json --alsologtostderr:
I0327 10:56:38.281273    7521 out.go:291] Setting OutFile to fd 1 ...
I0327 10:56:38.281421    7521 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 10:56:38.281424    7521 out.go:304] Setting ErrFile to fd 2...
I0327 10:56:38.281427    7521 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 10:56:38.281540    7521 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
I0327 10:56:38.281984    7521 config.go:182] Loaded profile config "functional-821000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 10:56:38.282045    7521 config.go:182] Loaded profile config "functional-821000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-821000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-821000 image ls --format yaml --alsologtostderr:
I0327 10:56:38.242925    7519 out.go:291] Setting OutFile to fd 1 ...
I0327 10:56:38.243088    7519 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 10:56:38.243092    7519 out.go:304] Setting ErrFile to fd 2...
I0327 10:56:38.243094    7519 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 10:56:38.243217    7519 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
I0327 10:56:38.243644    7519 config.go:182] Loaded profile config "functional-821000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 10:56:38.243705    7519 config.go:182] Loaded profile config "functional-821000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh pgrep buildkitd: exit status 83 (44.621167ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 image build -t localhost/my-image:functional-821000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-821000 image build -t localhost/my-image:functional-821000 testdata/build --alsologtostderr:
I0327 10:56:38.401886    7527 out.go:291] Setting OutFile to fd 1 ...
I0327 10:56:38.402238    7527 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 10:56:38.402242    7527 out.go:304] Setting ErrFile to fd 2...
I0327 10:56:38.402244    7527 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 10:56:38.402407    7527 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
I0327 10:56:38.402801    7527 config.go:182] Loaded profile config "functional-821000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 10:56:38.403263    7527 config.go:182] Loaded profile config "functional-821000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 10:56:38.403479    7527 build_images.go:133] succeeded building to: 
I0327 10:56:38.403482    7527 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 image ls
functional_test.go:442: expected "localhost/my-image:functional-821000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-821000 docker-env) && out/minikube-darwin-arm64 status -p functional-821000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-821000 docker-env) && out/minikube-darwin-arm64 status -p functional-821000": exit status 1 (53.183167ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 update-context --alsologtostderr -v=2: exit status 83 (43.717166ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
** stderr ** 
	I0327 10:56:38.072144    7511 out.go:291] Setting OutFile to fd 1 ...
	I0327 10:56:38.073189    7511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:56:38.073194    7511 out.go:304] Setting ErrFile to fd 2...
	I0327 10:56:38.073197    7511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:56:38.073363    7511 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 10:56:38.073597    7511 mustload.go:65] Loading cluster: functional-821000
	I0327 10:56:38.073786    7511 config.go:182] Loaded profile config "functional-821000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 10:56:38.076621    7511 out.go:177] * The control-plane node functional-821000 host is not running: state=Stopped
	I0327 10:56:38.080675    7511 out.go:177]   To start a cluster, run: "minikube start -p functional-821000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-821000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-821000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-821000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 update-context --alsologtostderr -v=2: exit status 83 (44.438292ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
** stderr ** 
	I0327 10:56:38.160296    7515 out.go:291] Setting OutFile to fd 1 ...
	I0327 10:56:38.160450    7515 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:56:38.160453    7515 out.go:304] Setting ErrFile to fd 2...
	I0327 10:56:38.160455    7515 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:56:38.160565    7515 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 10:56:38.160796    7515 mustload.go:65] Loading cluster: functional-821000
	I0327 10:56:38.161007    7515 config.go:182] Loaded profile config "functional-821000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 10:56:38.165629    7515 out.go:177] * The control-plane node functional-821000 host is not running: state=Stopped
	I0327 10:56:38.169624    7515 out.go:177]   To start a cluster, run: "minikube start -p functional-821000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-821000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-821000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-821000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 update-context --alsologtostderr -v=2: exit status 83 (43.669042ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
** stderr ** 
	I0327 10:56:38.116775    7513 out.go:291] Setting OutFile to fd 1 ...
	I0327 10:56:38.116932    7513 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:56:38.116935    7513 out.go:304] Setting ErrFile to fd 2...
	I0327 10:56:38.116937    7513 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:56:38.117087    7513 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 10:56:38.117326    7513 mustload.go:65] Loading cluster: functional-821000
	I0327 10:56:38.117528    7513 config.go:182] Loaded profile config "functional-821000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 10:56:38.121626    7513 out.go:177] * The control-plane node functional-821000 host is not running: state=Stopped
	I0327 10:56:38.124689    7513 out.go:177]   To start a cluster, run: "minikube start -p functional-821000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-821000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-821000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-821000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-821000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-821000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (27.010125ms)

** stderr ** 
	error: context "functional-821000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-821000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 service list: exit status 83 (45.3775ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-821000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-821000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-821000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 service list -o json: exit status 83 (43.707541ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-821000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 service --namespace=default --https --url hello-node: exit status 83 (44.659958ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-821000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 service hello-node --url --format={{.IP}}: exit status 83 (45.677292ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-821000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-821000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-821000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 service hello-node --url: exit status 83 (43.816166ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-821000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-821000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-821000"
functional_test.go:1565: failed to parse "* The control-plane node functional-821000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-821000\"": parse "* The control-plane node functional-821000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-821000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-821000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-821000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0327 10:55:44.540392    7238 out.go:291] Setting OutFile to fd 1 ...
I0327 10:55:44.540538    7238 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 10:55:44.540546    7238 out.go:304] Setting ErrFile to fd 2...
I0327 10:55:44.540548    7238 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 10:55:44.540674    7238 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
I0327 10:55:44.540865    7238 mustload.go:65] Loading cluster: functional-821000
I0327 10:55:44.541104    7238 config.go:182] Loaded profile config "functional-821000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 10:55:44.548997    7238 out.go:177] * The control-plane node functional-821000 host is not running: state=Stopped
I0327 10:55:44.556937    7238 out.go:177]   To start a cluster, run: "minikube start -p functional-821000"

stdout: * The control-plane node functional-821000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-821000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-821000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-821000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-821000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-821000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-821000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-821000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-821000": client config: context "functional-821000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (106.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-821000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-821000 get svc nginx-svc: exit status 1 (72.269417ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-821000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-821000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (106.07s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 image load --daemon gcr.io/google-containers/addon-resizer:functional-821000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-821000 image load --daemon gcr.io/google-containers/addon-resizer:functional-821000 --alsologtostderr: (1.285471708s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-821000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 image load --daemon gcr.io/google-containers/addon-resizer:functional-821000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-821000 image load --daemon gcr.io/google-containers/addon-resizer:functional-821000 --alsologtostderr: (1.300066542s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-821000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.494255583s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-821000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 image load --daemon gcr.io/google-containers/addon-resizer:functional-821000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-821000 image load --daemon gcr.io/google-containers/addon-resizer:functional-821000 --alsologtostderr: (1.162931292s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-821000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.74s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 image save gcr.io/google-containers/addon-resizer:functional-821000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-821000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.0205425s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 16 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (39.28s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (39.28s)
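The HTTP failure is the same symptom one layer up: the client times out waiting for headers because the name never resolves. A rough equivalent of the test's probe, assuming only the URL and expected body text from the log; the real test's deadline and retry loop are omitted:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"strings"
		"time"
	)

	func main() {
		// Fetch the service by its cluster DNS name and look for the nginx
		// welcome page, as the assertion above does.
		client := &http.Client{Timeout: 10 * time.Second} // illustrative timeout
		resp, err := client.Get("http://nginx-svc.default.svc.cluster.local./")
		if err != nil {
			fmt.Println("request failed, DNS forwarding or tunnel down:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println("nginx welcome page:", strings.Contains(string(body), "Welcome to nginx!"))
	}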

                                                
                                    
TestMultiControlPlane/serial/StartCluster (10.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-637000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-637000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.973902667s)

                                                
                                                
-- stdout --
	* [ha-637000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-637000" primary control-plane node in "ha-637000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-637000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 10:58:35.573494    7662 out.go:291] Setting OutFile to fd 1 ...
	I0327 10:58:35.573635    7662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:58:35.573638    7662 out.go:304] Setting ErrFile to fd 2...
	I0327 10:58:35.573641    7662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:58:35.573779    7662 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 10:58:35.574857    7662 out.go:298] Setting JSON to false
	I0327 10:58:35.590965    7662 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5288,"bootTime":1711557027,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 10:58:35.591023    7662 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 10:58:35.597365    7662 out.go:177] * [ha-637000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 10:58:35.605514    7662 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 10:58:35.605556    7662 notify.go:220] Checking for updates...
	I0327 10:58:35.610447    7662 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 10:58:35.613473    7662 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 10:58:35.616525    7662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 10:58:35.619478    7662 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 10:58:35.622557    7662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 10:58:35.625655    7662 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 10:58:35.629425    7662 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 10:58:35.636539    7662 start.go:297] selected driver: qemu2
	I0327 10:58:35.636545    7662 start.go:901] validating driver "qemu2" against <nil>
	I0327 10:58:35.636551    7662 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 10:58:35.638702    7662 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 10:58:35.641517    7662 out.go:177] * Automatically selected the socket_vmnet network
	I0327 10:58:35.644550    7662 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 10:58:35.644603    7662 cni.go:84] Creating CNI manager for ""
	I0327 10:58:35.644608    7662 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0327 10:58:35.644612    7662 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 10:58:35.644644    7662 start.go:340] cluster config:
	{Name:ha-637000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-637000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 10:58:35.648737    7662 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 10:58:35.655421    7662 out.go:177] * Starting "ha-637000" primary control-plane node in "ha-637000" cluster
	I0327 10:58:35.659504    7662 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 10:58:35.659519    7662 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 10:58:35.659530    7662 cache.go:56] Caching tarball of preloaded images
	I0327 10:58:35.659587    7662 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 10:58:35.659593    7662 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 10:58:35.659849    7662 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/ha-637000/config.json ...
	I0327 10:58:35.659859    7662 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/ha-637000/config.json: {Name:mke5f4f6f1c0c3b98dd355c0ce3281044da52893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 10:58:35.660108    7662 start.go:360] acquireMachinesLock for ha-637000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 10:58:35.660137    7662 start.go:364] duration metric: took 23.834µs to acquireMachinesLock for "ha-637000"
	I0327 10:58:35.660149    7662 start.go:93] Provisioning new machine with config: &{Name:ha-637000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-637000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 10:58:35.660174    7662 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 10:58:35.664452    7662 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 10:58:35.679794    7662 start.go:159] libmachine.API.Create for "ha-637000" (driver="qemu2")
	I0327 10:58:35.679817    7662 client.go:168] LocalClient.Create starting
	I0327 10:58:35.679884    7662 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 10:58:35.679913    7662 main.go:141] libmachine: Decoding PEM data...
	I0327 10:58:35.679923    7662 main.go:141] libmachine: Parsing certificate...
	I0327 10:58:35.679969    7662 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 10:58:35.679991    7662 main.go:141] libmachine: Decoding PEM data...
	I0327 10:58:35.680000    7662 main.go:141] libmachine: Parsing certificate...
	I0327 10:58:35.680325    7662 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 10:58:35.824046    7662 main.go:141] libmachine: Creating SSH key...
	I0327 10:58:36.085356    7662 main.go:141] libmachine: Creating Disk image...
	I0327 10:58:36.085367    7662 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 10:58:36.085578    7662 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/disk.qcow2
	I0327 10:58:36.098635    7662 main.go:141] libmachine: STDOUT: 
	I0327 10:58:36.098656    7662 main.go:141] libmachine: STDERR: 
	I0327 10:58:36.098709    7662 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/disk.qcow2 +20000M
	I0327 10:58:36.109494    7662 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 10:58:36.109521    7662 main.go:141] libmachine: STDERR: 
	I0327 10:58:36.109539    7662 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/disk.qcow2
	I0327 10:58:36.109544    7662 main.go:141] libmachine: Starting QEMU VM...
	I0327 10:58:36.109578    7662 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:63:7b:95:86:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/disk.qcow2
	I0327 10:58:36.111343    7662 main.go:141] libmachine: STDOUT: 
	I0327 10:58:36.111358    7662 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 10:58:36.111376    7662 client.go:171] duration metric: took 431.556208ms to LocalClient.Create
	I0327 10:58:38.113650    7662 start.go:128] duration metric: took 2.453462167s to createHost
	I0327 10:58:38.113743    7662 start.go:83] releasing machines lock for "ha-637000", held for 2.453611708s
	W0327 10:58:38.113808    7662 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 10:58:38.125072    7662 out.go:177] * Deleting "ha-637000" in qemu2 ...
	W0327 10:58:38.162347    7662 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 10:58:38.162396    7662 start.go:728] Will try again in 5 seconds ...
	I0327 10:58:43.164611    7662 start.go:360] acquireMachinesLock for ha-637000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 10:58:43.165002    7662 start.go:364] duration metric: took 304.542µs to acquireMachinesLock for "ha-637000"
	I0327 10:58:43.165125    7662 start.go:93] Provisioning new machine with config: &{Name:ha-637000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-637000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 10:58:43.165367    7662 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 10:58:43.176925    7662 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 10:58:43.221852    7662 start.go:159] libmachine.API.Create for "ha-637000" (driver="qemu2")
	I0327 10:58:43.221933    7662 client.go:168] LocalClient.Create starting
	I0327 10:58:43.222113    7662 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 10:58:43.222189    7662 main.go:141] libmachine: Decoding PEM data...
	I0327 10:58:43.222208    7662 main.go:141] libmachine: Parsing certificate...
	I0327 10:58:43.222282    7662 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 10:58:43.222326    7662 main.go:141] libmachine: Decoding PEM data...
	I0327 10:58:43.222345    7662 main.go:141] libmachine: Parsing certificate...
	I0327 10:58:43.222865    7662 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 10:58:43.376183    7662 main.go:141] libmachine: Creating SSH key...
	I0327 10:58:43.435685    7662 main.go:141] libmachine: Creating Disk image...
	I0327 10:58:43.435691    7662 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 10:58:43.435868    7662 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/disk.qcow2
	I0327 10:58:43.448110    7662 main.go:141] libmachine: STDOUT: 
	I0327 10:58:43.448128    7662 main.go:141] libmachine: STDERR: 
	I0327 10:58:43.448178    7662 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/disk.qcow2 +20000M
	I0327 10:58:43.458967    7662 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 10:58:43.458983    7662 main.go:141] libmachine: STDERR: 
	I0327 10:58:43.458992    7662 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/disk.qcow2
	I0327 10:58:43.458996    7662 main.go:141] libmachine: Starting QEMU VM...
	I0327 10:58:43.459024    7662 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:4b:5e:19:23:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/disk.qcow2
	I0327 10:58:43.460762    7662 main.go:141] libmachine: STDOUT: 
	I0327 10:58:43.460778    7662 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 10:58:43.460788    7662 client.go:171] duration metric: took 238.82325ms to LocalClient.Create
	I0327 10:58:45.462998    7662 start.go:128] duration metric: took 2.297601042s to createHost
	I0327 10:58:45.463134    7662 start.go:83] releasing machines lock for "ha-637000", held for 2.298080583s
	W0327 10:58:45.463507    7662 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-637000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-637000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 10:58:45.480071    7662 out.go:177] 
	W0327 10:58:45.483227    7662 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 10:58:45.483252    7662 out.go:239] * 
	* 
	W0327 10:58:45.485754    7662 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 10:58:45.502128    7662 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-637000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000: exit status 7 (71.909958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-637000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.05s)
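Both create attempts fail at the same point: the qemu-system-aarch64 process is launched through socket_vmnet_client, and the connect to /var/run/socket_vmnet is refused, meaning nothing was listening on that socket on this agent. A hedged host-side probe sketching how one might confirm that before rerunning; the socket path is the one from the log, everything else is illustrative:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// "Connection refused" on a unix socket means nothing is accepting
		// on it, which is what every VM create above hit.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not accepting connections:", err)
			fmt.Println("the socket_vmnet daemon likely needs to be (re)started on this agent")
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is up")
	}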

                                                
                                    
TestMultiControlPlane/serial/DeployApp (78.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-637000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-637000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (61.8695ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-637000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-637000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-637000 -- rollout status deployment/busybox: exit status 1 (59.805583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-637000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (59.383625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-637000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.82975ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-637000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.246584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-637000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.071084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-637000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.778959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-637000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.004625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-637000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.64225ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-637000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.461791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-637000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.936416ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-637000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.971666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-637000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.817041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-637000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-637000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-637000 -- exec  -- nslookup kubernetes.io: exit status 1 (59.716792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-637000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-637000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-637000 -- exec  -- nslookup kubernetes.default: exit status 1 (59.282291ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-637000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-637000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-637000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (59.041625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-637000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000: exit status 7 (32.711667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-637000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (78.54s)

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-637000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.8275ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-637000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000: exit status 7 (32.118125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-637000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-637000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-637000 -v=7 --alsologtostderr: exit status 83 (46.547375ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-637000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-637000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 11:00:04.248043    7827 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:00:04.248636    7827 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:04.248642    7827 out.go:304] Setting ErrFile to fd 2...
	I0327 11:00:04.248645    7827 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:04.248807    7827 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:00:04.249046    7827 mustload.go:65] Loading cluster: ha-637000
	I0327 11:00:04.249238    7827 config.go:182] Loaded profile config "ha-637000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:00:04.253535    7827 out.go:177] * The control-plane node ha-637000 host is not running: state=Stopped
	I0327 11:00:04.258576    7827 out.go:177]   To start a cluster, run: "minikube start -p ha-637000"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-637000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000: exit status 7 (31.804875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-637000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-637000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-637000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.354583ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-637000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-637000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-637000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000: exit status 7 (32.059917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-637000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
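The second error here, "unexpected end of JSON input", is a knock-on effect: kubectl wrote only to stderr, so the test decoded an empty string. A two-line sketch showing that this is exactly what encoding/json reports for empty input; the label type is illustrative, not the test's own:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var labels []map[string]string
		err := json.Unmarshal([]byte(""), &labels) // empty stdout from the failed kubectl call
		fmt.Println(err)                           // unexpected end of JSON input
	}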

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-637000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-637000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-637000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-637000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-637000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-637000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-637000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-637000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000: exit status 7 (31.89225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-637000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.10s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-637000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-637000 status --output json -v=7 --alsologtostderr: exit status 7 (31.803ms)

                                                
                                                
-- stdout --
	{"Name":"ha-637000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 11:00:04.490796    7841 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:00:04.490953    7841 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:04.490956    7841 out.go:304] Setting ErrFile to fd 2...
	I0327 11:00:04.490959    7841 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:04.491091    7841 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:00:04.491222    7841 out.go:298] Setting JSON to true
	I0327 11:00:04.491234    7841 mustload.go:65] Loading cluster: ha-637000
	I0327 11:00:04.491302    7841 notify.go:220] Checking for updates...
	I0327 11:00:04.491449    7841 config.go:182] Loaded profile config "ha-637000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:00:04.491455    7841 status.go:255] checking status of ha-637000 ...
	I0327 11:00:04.491653    7841 status.go:330] ha-637000 host status = "Stopped" (err=<nil>)
	I0327 11:00:04.491657    7841 status.go:343] host is not running, skipping remaining checks
	I0327 11:00:04.491659    7841 status.go:257] ha-637000 status: &{Name:ha-637000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-637000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000: exit status 7 (32.166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-637000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
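The unmarshal error is a shape mismatch rather than corrupt output: with a single stopped node, `status --output json` printed one object, while the test decodes into a slice of statuses. A small sketch reproducing the mismatch; the Status struct here only mirrors the fields visible in the stdout above, not minikube's real cmd.Status type:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status is a stand-in with the fields seen in the log, not minikube's type.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func main() {
		out := []byte(`{"Name":"ha-637000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

		var many []Status
		fmt.Println(json.Unmarshal(out, &many)) // json: cannot unmarshal object into Go value of type []main.Status

		var one Status
		fmt.Println(json.Unmarshal(out, &one)) // <nil>: decoding a single object works
	}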

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-637000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-637000 node stop m02 -v=7 --alsologtostderr: exit status 85 (49.933ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 11:00:04.555310    7845 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:00:04.555632    7845 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:04.555635    7845 out.go:304] Setting ErrFile to fd 2...
	I0327 11:00:04.555638    7845 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:04.555765    7845 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:00:04.556015    7845 mustload.go:65] Loading cluster: ha-637000
	I0327 11:00:04.556194    7845 config.go:182] Loaded profile config "ha-637000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:00:04.560499    7845 out.go:177] 
	W0327 11:00:04.563421    7845 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0327 11:00:04.563426    7845 out.go:239] * 
	* 
	W0327 11:00:04.565281    7845 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:00:04.570510    7845 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-637000 node stop m02 -v=7 --alsologtostderr": exit status 85
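Note: exit status 85 with GUEST_NODE_RETRIEVE simply means no node named "m02" exists in the profile: the earlier StartCluster failure left only the primary behind (the profile JSON quoted later in this report shows a single entry in Nodes, with an empty Name). A simplified sketch of the lookup that fails here, not minikube's actual implementation:

package main

import "fmt"

// Node is a pared-down stand-in for a node entry in a minikube profile.
type Node struct{ Name string }

// findNode mimics a lookup by name over the profile's node list.
func findNode(nodes []Node, name string) (*Node, error) {
	for i := range nodes {
		if nodes[i].Name == name {
			return &nodes[i], nil
		}
	}
	return nil, fmt.Errorf("retrieving node: Could not find node %s", name)
}

func main() {
	nodes := []Node{{Name: ""}} // only the primary survived the failed start
	if _, err := findNode(nodes, "m02"); err != nil {
		fmt.Println(err) // retrieving node: Could not find node m02
	}
}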
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr: exit status 7 (32.400625ms)

-- stdout --
	ha-637000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 11:00:04.605937    7847 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:00:04.606102    7847 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:04.606105    7847 out.go:304] Setting ErrFile to fd 2...
	I0327 11:00:04.606108    7847 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:04.606235    7847 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:00:04.606367    7847 out.go:298] Setting JSON to false
	I0327 11:00:04.606379    7847 mustload.go:65] Loading cluster: ha-637000
	I0327 11:00:04.606429    7847 notify.go:220] Checking for updates...
	I0327 11:00:04.606625    7847 config.go:182] Loaded profile config "ha-637000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:00:04.606631    7847 status.go:255] checking status of ha-637000 ...
	I0327 11:00:04.606851    7847 status.go:330] ha-637000 host status = "Stopped" (err=<nil>)
	I0327 11:00:04.606855    7847 status.go:343] host is not running, skipping remaining checks
	I0327 11:00:04.606857    7847 status.go:257] ha-637000 status: &{Name:ha-637000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr": ha-637000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr": ha-637000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr": ha-637000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr": ha-637000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000: exit status 7 (32.512667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-637000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.11s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-637000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-637000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-637000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":
null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-637000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"Con
trolPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\
",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000: exit status 7 (32.292458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-637000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.11s)

TestMultiControlPlane/serial/RestartSecondaryNode (54.31s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-637000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-637000 node start m02 -v=7 --alsologtostderr: exit status 85 (48.352208ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0327 11:00:04.776422    7857 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:00:04.776661    7857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:04.776664    7857 out.go:304] Setting ErrFile to fd 2...
	I0327 11:00:04.776666    7857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:04.776793    7857 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:00:04.777038    7857 mustload.go:65] Loading cluster: ha-637000
	I0327 11:00:04.777221    7857 config.go:182] Loaded profile config "ha-637000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:00:04.781686    7857 out.go:177] 
	W0327 11:00:04.784726    7857 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0327 11:00:04.784730    7857 out.go:239] * 
	* 
	W0327 11:00:04.786521    7857 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:00:04.789630    7857 out.go:177] 

** /stderr **
ha_test.go:422: I0327 11:00:04.776422    7857 out.go:291] Setting OutFile to fd 1 ...
I0327 11:00:04.776661    7857 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 11:00:04.776664    7857 out.go:304] Setting ErrFile to fd 2...
I0327 11:00:04.776666    7857 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 11:00:04.776793    7857 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
I0327 11:00:04.777038    7857 mustload.go:65] Loading cluster: ha-637000
I0327 11:00:04.777221    7857 config.go:182] Loaded profile config "ha-637000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 11:00:04.781686    7857 out.go:177] 
W0327 11:00:04.784726    7857 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0327 11:00:04.784730    7857 out.go:239] * 
* 
W0327 11:00:04.786521    7857 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0327 11:00:04.789630    7857 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-637000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr: exit status 7 (32.350959ms)

-- stdout --
	ha-637000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 11:00:04.825308    7859 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:00:04.825451    7859 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:04.825454    7859 out.go:304] Setting ErrFile to fd 2...
	I0327 11:00:04.825456    7859 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:04.825604    7859 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:00:04.825725    7859 out.go:298] Setting JSON to false
	I0327 11:00:04.825737    7859 mustload.go:65] Loading cluster: ha-637000
	I0327 11:00:04.825809    7859 notify.go:220] Checking for updates...
	I0327 11:00:04.825940    7859 config.go:182] Loaded profile config "ha-637000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:00:04.825948    7859 status.go:255] checking status of ha-637000 ...
	I0327 11:00:04.826181    7859 status.go:330] ha-637000 host status = "Stopped" (err=<nil>)
	I0327 11:00:04.826185    7859 status.go:343] host is not running, skipping remaining checks
	I0327 11:00:04.826187    7859 status.go:257] ha-637000 status: &{Name:ha-637000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr: exit status 7 (78.603959ms)

-- stdout --
	ha-637000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 11:00:05.818156    7861 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:00:05.818321    7861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:05.818325    7861 out.go:304] Setting ErrFile to fd 2...
	I0327 11:00:05.818328    7861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:05.818491    7861 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:00:05.818653    7861 out.go:298] Setting JSON to false
	I0327 11:00:05.818669    7861 mustload.go:65] Loading cluster: ha-637000
	I0327 11:00:05.818700    7861 notify.go:220] Checking for updates...
	I0327 11:00:05.818937    7861 config.go:182] Loaded profile config "ha-637000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:00:05.818945    7861 status.go:255] checking status of ha-637000 ...
	I0327 11:00:05.819206    7861 status.go:330] ha-637000 host status = "Stopped" (err=<nil>)
	I0327 11:00:05.819211    7861 status.go:343] host is not running, skipping remaining checks
	I0327 11:00:05.819213    7861 status.go:257] ha-637000 status: &{Name:ha-637000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr: exit status 7 (76.28575ms)

-- stdout --
	ha-637000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 11:00:07.552696    7863 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:00:07.552887    7863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:07.552891    7863 out.go:304] Setting ErrFile to fd 2...
	I0327 11:00:07.552894    7863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:07.553053    7863 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:00:07.553222    7863 out.go:298] Setting JSON to false
	I0327 11:00:07.553236    7863 mustload.go:65] Loading cluster: ha-637000
	I0327 11:00:07.553277    7863 notify.go:220] Checking for updates...
	I0327 11:00:07.553502    7863 config.go:182] Loaded profile config "ha-637000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:00:07.553510    7863 status.go:255] checking status of ha-637000 ...
	I0327 11:00:07.553801    7863 status.go:330] ha-637000 host status = "Stopped" (err=<nil>)
	I0327 11:00:07.553806    7863 status.go:343] host is not running, skipping remaining checks
	I0327 11:00:07.553809    7863 status.go:257] ha-637000 status: &{Name:ha-637000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr: exit status 7 (75.933333ms)

-- stdout --
	ha-637000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 11:00:09.903292    7865 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:00:09.903480    7865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:09.903484    7865 out.go:304] Setting ErrFile to fd 2...
	I0327 11:00:09.903486    7865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:09.903628    7865 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:00:09.903783    7865 out.go:298] Setting JSON to false
	I0327 11:00:09.903798    7865 mustload.go:65] Loading cluster: ha-637000
	I0327 11:00:09.903843    7865 notify.go:220] Checking for updates...
	I0327 11:00:09.904021    7865 config.go:182] Loaded profile config "ha-637000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:00:09.904028    7865 status.go:255] checking status of ha-637000 ...
	I0327 11:00:09.904267    7865 status.go:330] ha-637000 host status = "Stopped" (err=<nil>)
	I0327 11:00:09.904271    7865 status.go:343] host is not running, skipping remaining checks
	I0327 11:00:09.904274    7865 status.go:257] ha-637000 status: &{Name:ha-637000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr: exit status 7 (76.157875ms)

-- stdout --
	ha-637000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 11:00:13.069695    7867 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:00:13.069901    7867 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:13.069906    7867 out.go:304] Setting ErrFile to fd 2...
	I0327 11:00:13.069909    7867 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:13.070071    7867 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:00:13.070234    7867 out.go:298] Setting JSON to false
	I0327 11:00:13.070250    7867 mustload.go:65] Loading cluster: ha-637000
	I0327 11:00:13.070282    7867 notify.go:220] Checking for updates...
	I0327 11:00:13.070503    7867 config.go:182] Loaded profile config "ha-637000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:00:13.070510    7867 status.go:255] checking status of ha-637000 ...
	I0327 11:00:13.070771    7867 status.go:330] ha-637000 host status = "Stopped" (err=<nil>)
	I0327 11:00:13.070776    7867 status.go:343] host is not running, skipping remaining checks
	I0327 11:00:13.070783    7867 status.go:257] ha-637000 status: &{Name:ha-637000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr: exit status 7 (75.203667ms)

-- stdout --
	ha-637000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 11:00:19.075117    7871 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:00:19.075271    7871 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:19.075275    7871 out.go:304] Setting ErrFile to fd 2...
	I0327 11:00:19.075278    7871 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:19.075442    7871 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:00:19.075596    7871 out.go:298] Setting JSON to false
	I0327 11:00:19.075612    7871 mustload.go:65] Loading cluster: ha-637000
	I0327 11:00:19.075649    7871 notify.go:220] Checking for updates...
	I0327 11:00:19.075909    7871 config.go:182] Loaded profile config "ha-637000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:00:19.075916    7871 status.go:255] checking status of ha-637000 ...
	I0327 11:00:19.076183    7871 status.go:330] ha-637000 host status = "Stopped" (err=<nil>)
	I0327 11:00:19.076187    7871 status.go:343] host is not running, skipping remaining checks
	I0327 11:00:19.076190    7871 status.go:257] ha-637000 status: &{Name:ha-637000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr: exit status 7 (77.432584ms)

-- stdout --
	ha-637000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 11:00:29.991253    7882 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:00:29.991463    7882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:29.991467    7882 out.go:304] Setting ErrFile to fd 2...
	I0327 11:00:29.991470    7882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:29.991624    7882 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:00:29.991784    7882 out.go:298] Setting JSON to false
	I0327 11:00:29.991799    7882 mustload.go:65] Loading cluster: ha-637000
	I0327 11:00:29.991842    7882 notify.go:220] Checking for updates...
	I0327 11:00:29.992042    7882 config.go:182] Loaded profile config "ha-637000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:00:29.992050    7882 status.go:255] checking status of ha-637000 ...
	I0327 11:00:29.992329    7882 status.go:330] ha-637000 host status = "Stopped" (err=<nil>)
	I0327 11:00:29.992334    7882 status.go:343] host is not running, skipping remaining checks
	I0327 11:00:29.992337    7882 status.go:257] ha-637000 status: &{Name:ha-637000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr: exit status 7 (76.299292ms)

-- stdout --
	ha-637000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 11:00:45.400491    7907 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:00:45.400714    7907 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:45.400720    7907 out.go:304] Setting ErrFile to fd 2...
	I0327 11:00:45.400723    7907 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:45.400895    7907 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:00:45.401052    7907 out.go:298] Setting JSON to false
	I0327 11:00:45.401067    7907 mustload.go:65] Loading cluster: ha-637000
	I0327 11:00:45.401107    7907 notify.go:220] Checking for updates...
	I0327 11:00:45.401338    7907 config.go:182] Loaded profile config "ha-637000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:00:45.401345    7907 status.go:255] checking status of ha-637000 ...
	I0327 11:00:45.401634    7907 status.go:330] ha-637000 host status = "Stopped" (err=<nil>)
	I0327 11:00:45.401639    7907 status.go:343] host is not running, skipping remaining checks
	I0327 11:00:45.401642    7907 status.go:257] ha-637000 status: &{Name:ha-637000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr: exit status 7 (70.686708ms)

-- stdout --
	ha-637000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 11:00:59.018215    7922 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:00:59.018417    7922 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:59.018422    7922 out.go:304] Setting ErrFile to fd 2...
	I0327 11:00:59.018425    7922 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:00:59.018597    7922 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:00:59.018793    7922 out.go:298] Setting JSON to false
	I0327 11:00:59.018811    7922 mustload.go:65] Loading cluster: ha-637000
	I0327 11:00:59.018854    7922 notify.go:220] Checking for updates...
	I0327 11:00:59.019078    7922 config.go:182] Loaded profile config "ha-637000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:00:59.019087    7922 status.go:255] checking status of ha-637000 ...
	I0327 11:00:59.019375    7922 status.go:330] ha-637000 host status = "Stopped" (err=<nil>)
	I0327 11:00:59.019380    7922 status.go:343] host is not running, skipping remaining checks
	I0327 11:00:59.019383    7922 status.go:257] ha-637000 status: &{Name:ha-637000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000: exit status 7 (36.377042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-637000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (54.31s)
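Note: the 54-second wall time of this subtest is not one slow command. The stderr timestamps above (11:00:04, 11:00:05, 11:00:07, 11:00:09, 11:00:13, 11:00:19, 11:00:29, 11:00:45, 11:00:59) are consistent with the test polling minikube status and backing off between attempts until a deadline passes. A generic sketch of that poll-with-backoff pattern, with illustrative delays rather than the test's exact schedule:

package main

import (
	"fmt"
	"time"
)

// pollUntil retries check with a growing delay until it succeeds or the
// deadline passes; the doubling schedule here is illustrative only.
func pollUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	delay := time.Second
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting: last error: %w", err)
		}
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	err := pollUntil(5*time.Second, func() error {
		return fmt.Errorf("host is Stopped") // stand-in for the failing status check
	})
	fmt.Println(err)
}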

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-637000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-637000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-637000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDr
iverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-637000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true
,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInt
erval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-637000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-637000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-637000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":nul
l,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-637000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"Contro
lPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\
"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000: exit status 7 (31.309ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-637000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.10s)
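Note: both assertions above fail for the same underlying reason: the Nodes array inside the quoted profile JSON holds exactly one entry, so the profile can be neither a 4-node cluster nor "HAppy". A hedged sketch of extracting that count from the "profile list --output json" payload; the field names are taken from the output quoted above, everything else is illustrative:

package main

import (
	"encoding/json"
	"fmt"
)

// profileList models just enough of the payload to reach Config.Nodes.
type profileList struct {
	Valid []struct {
		Name   string
		Status string
		Config struct {
			Nodes []struct {
				Name         string
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

func main() {
	// Trimmed-down version of the JSON quoted in the failure messages above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-637000","Status":"Stopped",
		"Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)

	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	p := pl.Valid[0]
	fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
	// Output: ha-637000: status=Stopped nodes=1 (the test wanted 4 and "HAppy")
}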

TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.79s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-637000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-637000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-637000 -v=7 --alsologtostderr: (3.418770042s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-637000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-637000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.23203875s)

-- stdout --
	* [ha-637000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-637000" primary control-plane node in "ha-637000" cluster
	* Restarting existing qemu2 VM for "ha-637000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-637000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
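Note: every restart attempt in this run dies on the same error: the qemu2 driver shells out to socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the VM never gets a network and never boots. A quick diagnostic probe of that unix socket, offered as a sketch rather than as part of the test suite (the path matches the one in the libmachine command line in the stderr below):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same socket path the qemu2 driver passes to socket_vmnet_client.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// With the daemon down this reports "connect: connection refused",
		// or "no such file or directory" if the socket was never created.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}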
** stderr ** 
	I0327 11:01:02.677842    7961 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:01:02.678057    7961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:01:02.678062    7961 out.go:304] Setting ErrFile to fd 2...
	I0327 11:01:02.678065    7961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:01:02.678252    7961 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:01:02.679604    7961 out.go:298] Setting JSON to false
	I0327 11:01:02.700845    7961 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5435,"bootTime":1711557027,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:01:02.700920    7961 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:01:02.706245    7961 out.go:177] * [ha-637000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:01:02.714101    7961 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:01:02.714115    7961 notify.go:220] Checking for updates...
	I0327 11:01:02.718233    7961 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:01:02.722225    7961 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:01:02.726137    7961 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:01:02.729175    7961 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:01:02.732184    7961 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:01:02.735526    7961 config.go:182] Loaded profile config "ha-637000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:01:02.735591    7961 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:01:02.740168    7961 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 11:01:02.747262    7961 start.go:297] selected driver: qemu2
	I0327 11:01:02.747271    7961 start.go:901] validating driver "qemu2" against &{Name:ha-637000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-637000 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:01:02.747360    7961 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:01:02.749993    7961 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:01:02.750048    7961 cni.go:84] Creating CNI manager for ""
	I0327 11:01:02.750054    7961 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0327 11:01:02.750122    7961 start.go:340] cluster config:
	{Name:ha-637000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-637000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketV
MnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:01:02.754738    7961 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:01:02.761168    7961 out.go:177] * Starting "ha-637000" primary control-plane node in "ha-637000" cluster
	I0327 11:01:02.765226    7961 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 11:01:02.765243    7961 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 11:01:02.765258    7961 cache.go:56] Caching tarball of preloaded images
	I0327 11:01:02.765326    7961 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:01:02.765333    7961 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 11:01:02.765418    7961 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/ha-637000/config.json ...
	I0327 11:01:02.765899    7961 start.go:360] acquireMachinesLock for ha-637000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:01:02.765938    7961 start.go:364] duration metric: took 32.166µs to acquireMachinesLock for "ha-637000"
	I0327 11:01:02.765949    7961 start.go:96] Skipping create...Using existing machine configuration
	I0327 11:01:02.765955    7961 fix.go:54] fixHost starting: 
	I0327 11:01:02.766096    7961 fix.go:112] recreateIfNeeded on ha-637000: state=Stopped err=<nil>
	W0327 11:01:02.766105    7961 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 11:01:02.769175    7961 out.go:177] * Restarting existing qemu2 VM for "ha-637000" ...
	I0327 11:01:02.777226    7961 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:4b:5e:19:23:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/disk.qcow2
	I0327 11:01:02.779469    7961 main.go:141] libmachine: STDOUT: 
	I0327 11:01:02.779495    7961 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:01:02.779537    7961 fix.go:56] duration metric: took 13.582334ms for fixHost
	I0327 11:01:02.779542    7961 start.go:83] releasing machines lock for "ha-637000", held for 13.598959ms
	W0327 11:01:02.779551    7961 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:01:02.779588    7961 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:01:02.779594    7961 start.go:728] Will try again in 5 seconds ...
	I0327 11:01:07.781756    7961 start.go:360] acquireMachinesLock for ha-637000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:01:07.782317    7961 start.go:364] duration metric: took 420.333µs to acquireMachinesLock for "ha-637000"
	I0327 11:01:07.782500    7961 start.go:96] Skipping create...Using existing machine configuration
	I0327 11:01:07.782522    7961 fix.go:54] fixHost starting: 
	I0327 11:01:07.783303    7961 fix.go:112] recreateIfNeeded on ha-637000: state=Stopped err=<nil>
	W0327 11:01:07.783329    7961 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 11:01:07.788756    7961 out.go:177] * Restarting existing qemu2 VM for "ha-637000" ...
	I0327 11:01:07.793953    7961 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:4b:5e:19:23:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/disk.qcow2
	I0327 11:01:07.804344    7961 main.go:141] libmachine: STDOUT: 
	I0327 11:01:07.804422    7961 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:01:07.804523    7961 fix.go:56] duration metric: took 22.00025ms for fixHost
	I0327 11:01:07.804543    7961 start.go:83] releasing machines lock for "ha-637000", held for 22.154084ms
	W0327 11:01:07.804748    7961 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-637000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-637000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:01:07.814754    7961 out.go:177] 
	W0327 11:01:07.817766    7961 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:01:07.817791    7961 out.go:239] * 
	* 
	W0327 11:01:07.820610    7961 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:01:07.829721    7961 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-637000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-637000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000: exit status 7 (34.5645ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-637000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.79s)
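
Every failure in this block reduces to the same root cause: libmachine launches QEMU through socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the VM never comes up and each dependent subtest inherits a stopped host. A minimal Go sketch of that reachability check, assuming nothing beyond the SocketVMnetPath shown in the profile config above (this probe is not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// SocketVMnetPath from the profile config logged above.
		const sock = "/var/run/socket_vmnet"

		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A daemon that is not running yields "connect: connection refused"
			// (stale socket file) or "no such file or directory" (no file at all).
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails on the build host, every qemu2/socket_vmnet test below fails the same way before Kubernetes is ever involved.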

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-637000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-637000 node delete m03 -v=7 --alsologtostderr: exit status 83 (42.6775ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-637000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-637000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 11:01:07.981400    7975 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:01:07.981819    7975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:01:07.981823    7975 out.go:304] Setting ErrFile to fd 2...
	I0327 11:01:07.981826    7975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:01:07.981986    7975 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:01:07.982230    7975 mustload.go:65] Loading cluster: ha-637000
	I0327 11:01:07.982411    7975 config.go:182] Loaded profile config "ha-637000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:01:07.987037    7975 out.go:177] * The control-plane node ha-637000 host is not running: state=Stopped
	I0327 11:01:07.989940    7975 out.go:177]   To start a cluster, run: "minikube start -p ha-637000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-637000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr: exit status 7 (32.137875ms)

                                                
                                                
-- stdout --
	ha-637000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 11:01:08.023515    7977 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:01:08.023906    7977 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:01:08.023911    7977 out.go:304] Setting ErrFile to fd 2...
	I0327 11:01:08.023913    7977 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:01:08.024114    7977 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:01:08.024275    7977 out.go:298] Setting JSON to false
	I0327 11:01:08.024288    7977 mustload.go:65] Loading cluster: ha-637000
	I0327 11:01:08.024437    7977 notify.go:220] Checking for updates...
	I0327 11:01:08.024807    7977 config.go:182] Loaded profile config "ha-637000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:01:08.024819    7977 status.go:255] checking status of ha-637000 ...
	I0327 11:01:08.025003    7977 status.go:330] ha-637000 host status = "Stopped" (err=<nil>)
	I0327 11:01:08.025008    7977 status.go:343] host is not running, skipping remaining checks
	I0327 11:01:08.025010    7977 status.go:257] ha-637000 status: &{Name:ha-637000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000: exit status 7 (32.071875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-637000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)
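
Note the exit codes in play: node delete returns 83 when the control-plane host is simply not running, while a failed start returns 80 (GUEST_PROVISION) and a status query against a stopped host returns 7. A sketch of how a harness could branch on these codes, using only the binary and arguments shown above; the code-to-meaning mapping is inferred from this report, not from minikube documentation:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"-p", "ha-637000", "node", "delete", "m03")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			switch code := exitErr.ExitCode(); code {
			case 80:
				fmt.Println("guest provisioning failed (GUEST_PROVISION)")
			case 83:
				fmt.Println("host not running; start the cluster first")
			default:
				fmt.Println("minikube exited with code", code)
			}
		}
	}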

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-637000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-637000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-637000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":
null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-637000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"Con
trolPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\
",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000: exit status 7 (31.973583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-637000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.10s)
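
The Degraded assertion is driven entirely by "out/minikube-darwin-arm64 profile list --output json", whose payload is quoted (with escaping) in the failure message above. A minimal decoder for just the fields the assertion reads, with the struct shape inferred from that quoted JSON:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Minimal view of the "profile list --output json" payload quoted above;
	// only the fields the test cares about are declared, the rest are ignored.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
			Config struct {
				Nodes []struct {
					ControlPlane bool `json:"ControlPlane"`
					Worker       bool `json:"Worker"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64",
			"profile", "list", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: status=%s nodes=%d\n",
				p.Name, p.Status, len(p.Config.Nodes))
		}
	}

With the cluster in the state captured above, this prints "ha-637000: status=Stopped nodes=1", which is exactly why the test expected "Degraded" but saw "Stopped".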

                                                
                                    
TestMultiControlPlane/serial/StopCluster (2.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-637000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-637000 stop -v=7 --alsologtostderr: (2.078254708s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr: exit status 7 (73.291833ms)

                                                
                                                
-- stdout --
	ha-637000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 11:01:10.311565    8000 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:01:10.311760    8000 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:01:10.311764    8000 out.go:304] Setting ErrFile to fd 2...
	I0327 11:01:10.311767    8000 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:01:10.311943    8000 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:01:10.312098    8000 out.go:298] Setting JSON to false
	I0327 11:01:10.312114    8000 mustload.go:65] Loading cluster: ha-637000
	I0327 11:01:10.312145    8000 notify.go:220] Checking for updates...
	I0327 11:01:10.312373    8000 config.go:182] Loaded profile config "ha-637000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:01:10.312381    8000 status.go:255] checking status of ha-637000 ...
	I0327 11:01:10.312654    8000 status.go:330] ha-637000 host status = "Stopped" (err=<nil>)
	I0327 11:01:10.312659    8000 status.go:343] host is not running, skipping remaining checks
	I0327 11:01:10.312662    8000 status.go:257] ha-637000 status: &{Name:ha-637000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr": ha-637000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr": ha-637000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-637000 status -v=7 --alsologtostderr": ha-637000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000: exit status 7 (34.2415ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-637000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (2.19s)
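
StopCluster fails on counting, not on stopping: the stop itself succeeds in about two seconds, but with only one node ever created the status text carries a single "type: Control Plane" stanza where the test expects two, and one stopped kubelet where it expects three. The three assertions amount to substring counts over the status output, roughly:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status text as captured above; with a single node, every
		// per-node stanza appears exactly once.
		status := strings.Join([]string{
			"ha-637000",
			"type: Control Plane",
			"host: Stopped",
			"kubelet: Stopped",
			"apiserver: Stopped",
			"kubeconfig: Stopped",
		}, "\n")

		fmt.Println("control planes:", strings.Count(status, "type: Control Plane"))
		fmt.Println("stopped kubelets:", strings.Count(status, "kubelet: Stopped"))
	}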

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-637000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-637000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.201439458s)

                                                
                                                
-- stdout --
	* [ha-637000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-637000" primary control-plane node in "ha-637000" cluster
	* Restarting existing qemu2 VM for "ha-637000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-637000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 11:01:10.377932    8004 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:01:10.378060    8004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:01:10.378063    8004 out.go:304] Setting ErrFile to fd 2...
	I0327 11:01:10.378065    8004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:01:10.378195    8004 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:01:10.379214    8004 out.go:298] Setting JSON to false
	I0327 11:01:10.395568    8004 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5443,"bootTime":1711557027,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:01:10.395634    8004 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:01:10.400875    8004 out.go:177] * [ha-637000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:01:10.408793    8004 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:01:10.412750    8004 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:01:10.408829    8004 notify.go:220] Checking for updates...
	I0327 11:01:10.419803    8004 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:01:10.422776    8004 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:01:10.425757    8004 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:01:10.428800    8004 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:01:10.432052    8004 config.go:182] Loaded profile config "ha-637000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:01:10.432305    8004 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:01:10.436744    8004 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 11:01:10.442683    8004 start.go:297] selected driver: qemu2
	I0327 11:01:10.442689    8004 start.go:901] validating driver "qemu2" against &{Name:ha-637000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-637000 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:01:10.442759    8004 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:01:10.445016    8004 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:01:10.445064    8004 cni.go:84] Creating CNI manager for ""
	I0327 11:01:10.445068    8004 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0327 11:01:10.445111    8004 start.go:340] cluster config:
	{Name:ha-637000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-637000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketV
MnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:01:10.449399    8004 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:01:10.456761    8004 out.go:177] * Starting "ha-637000" primary control-plane node in "ha-637000" cluster
	I0327 11:01:10.460755    8004 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 11:01:10.460771    8004 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 11:01:10.460784    8004 cache.go:56] Caching tarball of preloaded images
	I0327 11:01:10.460844    8004 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:01:10.460850    8004 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 11:01:10.460917    8004 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/ha-637000/config.json ...
	I0327 11:01:10.461377    8004 start.go:360] acquireMachinesLock for ha-637000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:01:10.461403    8004 start.go:364] duration metric: took 19.958µs to acquireMachinesLock for "ha-637000"
	I0327 11:01:10.461412    8004 start.go:96] Skipping create...Using existing machine configuration
	I0327 11:01:10.461417    8004 fix.go:54] fixHost starting: 
	I0327 11:01:10.461534    8004 fix.go:112] recreateIfNeeded on ha-637000: state=Stopped err=<nil>
	W0327 11:01:10.461542    8004 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 11:01:10.469795    8004 out.go:177] * Restarting existing qemu2 VM for "ha-637000" ...
	I0327 11:01:10.473766    8004 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:4b:5e:19:23:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/disk.qcow2
	I0327 11:01:10.475779    8004 main.go:141] libmachine: STDOUT: 
	I0327 11:01:10.475796    8004 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:01:10.475825    8004 fix.go:56] duration metric: took 14.407625ms for fixHost
	I0327 11:01:10.475829    8004 start.go:83] releasing machines lock for "ha-637000", held for 14.422083ms
	W0327 11:01:10.475836    8004 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:01:10.475864    8004 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:01:10.475869    8004 start.go:728] Will try again in 5 seconds ...
	I0327 11:01:15.478008    8004 start.go:360] acquireMachinesLock for ha-637000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:01:15.478436    8004 start.go:364] duration metric: took 330.084µs to acquireMachinesLock for "ha-637000"
	I0327 11:01:15.478600    8004 start.go:96] Skipping create...Using existing machine configuration
	I0327 11:01:15.478623    8004 fix.go:54] fixHost starting: 
	I0327 11:01:15.479270    8004 fix.go:112] recreateIfNeeded on ha-637000: state=Stopped err=<nil>
	W0327 11:01:15.479297    8004 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 11:01:15.487716    8004 out.go:177] * Restarting existing qemu2 VM for "ha-637000" ...
	I0327 11:01:15.492848    8004 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:4b:5e:19:23:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/ha-637000/disk.qcow2
	I0327 11:01:15.502633    8004 main.go:141] libmachine: STDOUT: 
	I0327 11:01:15.502704    8004 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:01:15.502791    8004 fix.go:56] duration metric: took 24.170167ms for fixHost
	I0327 11:01:15.502812    8004 start.go:83] releasing machines lock for "ha-637000", held for 24.3485ms
	W0327 11:01:15.503108    8004 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-637000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-637000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:01:15.509725    8004 out.go:177] 
	W0327 11:01:15.513741    8004 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:01:15.513764    8004 out.go:239] * 
	* 
	W0327 11:01:15.516468    8004 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:01:15.535686    8004 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-637000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000: exit status 7 (70.176708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-637000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.27s)
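
RestartCluster shows the driver's retry shape end to end: one immediate fixHost attempt, the logged "Will try again in 5 seconds", one more attempt, then GUEST_PROVISION. A sketch of that two-attempt loop, with a socket probe standing in for the real driver start (the startHost helper is illustrative, not minikube's):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// startHost stands in for the driver start that fails above; it only
	// probes the socket the real start would need to use.
	func startHost() error {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
		if err != nil {
			return fmt.Errorf("driver start: %w", err)
		}
		return conn.Close()
	}

	func main() {
		// Mirrors the log above: a first attempt, then exactly one retry
		// after a fixed five-second pause.
		const attempts = 2
		var err error
		for i := 1; i <= attempts; i++ {
			if err = startHost(); err == nil {
				fmt.Println("host started")
				return
			}
			if i < attempts {
				fmt.Println("! StartHost failed, but will try again:", err)
				time.Sleep(5 * time.Second)
			}
		}
		fmt.Println("X giving up:", err)
	}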

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-637000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-637000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-637000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":
null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-637000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"Con
trolPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\
",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000: exit status 7 (31.644833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-637000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-637000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-637000 --control-plane -v=7 --alsologtostderr: exit status 83 (43.831708ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-637000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-637000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 11:01:15.757446    8024 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:01:15.757569    8024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:01:15.757572    8024 out.go:304] Setting ErrFile to fd 2...
	I0327 11:01:15.757575    8024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:01:15.757712    8024 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:01:15.757953    8024 mustload.go:65] Loading cluster: ha-637000
	I0327 11:01:15.758134    8024 config.go:182] Loaded profile config "ha-637000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:01:15.762865    8024 out.go:177] * The control-plane node ha-637000 host is not running: state=Stopped
	I0327 11:01:15.766730    8024 out.go:177]   To start a cluster, run: "minikube start -p ha-637000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-637000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000: exit status 7 (32.325791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-637000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-637000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-637000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-637000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDr
iverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-637000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true
,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInt
erval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-637000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-637000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-637000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":nul
l,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-637000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"Contro
lPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\
"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-637000 -n ha-637000: exit status 7 (32.236792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-637000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.11s)

                                                
                                    
TestImageBuild/serial/Setup (9.96s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-030000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-030000 --driver=qemu2 : exit status 80 (9.884166041s)

                                                
                                                
-- stdout --
	* [image-030000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-030000" primary control-plane node in "image-030000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-030000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-030000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-030000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-030000 -n image-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-030000 -n image-030000: exit status 7 (73.845ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-030000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.96s)
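
Every qemu2-driver failure in this report shares the same root cause: nothing is accepting connections on /var/run/socket_vmnet, so socket_vmnet_client's connect(2) is refused before QEMU is even launched. The condition can be checked independently of minikube; the sketch below is an illustrative probe (the file layout and output strings are not part of the test suite), assuming only the socket path shown in the logs.

// probe.go — minimal diagnostic sketch for the recurring "Connection refused"
// failures above; not part of the minikube test suite.
package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the failures above

	if _, err := os.Stat(sock); err != nil {
		// Socket file missing: the daemon was likely never started on this host.
		fmt.Printf("stat %s: %v\n", sock, err)
		return
	}
	conn, err := net.Dial("unix", sock)
	if err != nil {
		// The state this CI host is in: the path exists but no socket_vmnet
		// daemon is listening, so connect(2) returns "connection refused".
		fmt.Printf("dial %s: %v\n", sock, err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

On a healthy host the dial succeeds immediately; the refused-connection branch matches the state this machine stays in for the rest of the report.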

                                                
                                    
TestJSONOutput/start/Command (9.88s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-869000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-869000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.8834175s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4dc4ec56-c831-4bcd-acf4-41aa2f6bc577","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-869000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b6ce3bb4-a732-463d-8985-ad9731b48ccd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18517"}}
	{"specversion":"1.0","id":"27c6651d-6f63-4395-8604-7c9db101ae41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig"}}
	{"specversion":"1.0","id":"baebb253-acfd-43e5-a032-3fe39d904e0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"38f49eb7-56dc-41b2-b7b1-37956a27dee6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bc634ef5-92c0-47cf-be38-4c7aceda8be3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube"}}
	{"specversion":"1.0","id":"06f3e79e-acb7-416d-be33-4d2d19f1c6a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"52647b4f-3236-41e5-b1de-bf69e0c096b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a92728a1-62d7-46d6-b13a-0ea00b7c03d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"56cb192b-f4cf-46c9-9c5f-d46b79cbf1c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-869000\" primary control-plane node in \"json-output-869000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1e37a11e-39a4-4fa2-9345-36e32d073630","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"df2b110b-e17f-41f9-84a4-f6b5140070a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-869000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"c18c4fa2-8157-4928-90a0-5e10f322c8d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"3daf81db-e073-4015-8965-0950a244ecaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"4e0f5404-bf90-4c14-a07b-3bb7d9383ddb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-869000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"28d7b7a2-9f09-4e45-95db-1d528c90d8e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"bb3ef74b-eb63-4de9-ab1b-70e28cb84460","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-869000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.88s)
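
The JSON-output failure is a knock-on effect: json_output_test expects every stdout line to be one CloudEvents object, but the driver's raw "OUTPUT:" / "ERROR:" lines are interleaved into the stream, and the first non-JSON byte ('O') aborts decoding with exactly the error logged at json_output_test.go:70. A minimal sketch of that decode step, assuming a plain map target (the test's real event types may differ):

// decode.go — why "invalid character 'O' looking for beginning of value"
// appears once raw driver output leaks into the --output=json stream.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Two representative stdout lines from the run above: one valid
	// CloudEvents object, then the raw driver output that broke decoding.
	lines := []string{
		`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"message":"Creating qemu2 VM ..."}}`,
		`OUTPUT: `,
	}
	for _, l := range lines {
		var ev map[string]interface{}
		if err := json.Unmarshal([]byte(l), &ev); err != nil {
			// Prints: invalid character 'O' looking for beginning of value
			fmt.Printf("converting to cloud events: %v\n", err)
			continue
		}
		fmt.Printf("ok: type=%v\n", ev["type"])
	}
}

The same decode failure recurs in TestJSONOutput/unpause/Command below, where the not-running advisory is emitted as plain "*"-prefixed text despite --output=json, so the decoder trips on '*' instead of 'O'.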

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-869000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-869000 --output=json --user=testUser: exit status 83 (82.332625ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"90f74b54-c47d-48d3-83bd-fffc214a1207","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-869000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"6ad26759-0b25-40f8-901b-126bdd5cd887","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-869000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-869000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-869000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-869000 --output=json --user=testUser: exit status 83 (47.344834ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-869000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-869000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-869000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-869000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.3s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-733000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-733000 --driver=qemu2 : exit status 80 (9.845933125s)

                                                
                                                
-- stdout --
	* [first-733000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-733000" primary control-plane node in "first-733000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-733000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-733000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-733000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-27 11:01:49.833335 -0700 PDT m=+625.639593668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-734000 -n second-734000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-734000 -n second-734000: exit status 85 (78.395375ms)

                                                
                                                
-- stdout --
	* Profile "second-734000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-734000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-734000" host is not running, skipping log retrieval (state="* Profile \"second-734000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-734000\"")
helpers_test.go:175: Cleaning up "second-734000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-734000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-27 11:01:50.15118 -0700 PDT m=+625.957440543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-733000 -n first-733000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-733000 -n first-733000: exit status 7 (32.204708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-733000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-733000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-733000
--- FAIL: TestMinikubeProfile (10.30s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.65s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-299000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-299000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.575576958s)

                                                
                                                
-- stdout --
	* [mount-start-1-299000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-299000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-299000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-299000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-299000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-299000 -n mount-start-1-299000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-299000 -n mount-start-1-299000: exit status 7 (69.650875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-299000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.65s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (10.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-906000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-906000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.959627542s)

                                                
                                                
-- stdout --
	* [multinode-906000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-906000" primary control-plane node in "multinode-906000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-906000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 11:02:01.303129    8221 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:02:01.303251    8221 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:02:01.303254    8221 out.go:304] Setting ErrFile to fd 2...
	I0327 11:02:01.303257    8221 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:02:01.303394    8221 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:02:01.304452    8221 out.go:298] Setting JSON to false
	I0327 11:02:01.320748    8221 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5494,"bootTime":1711557027,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:02:01.320803    8221 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:02:01.326910    8221 out.go:177] * [multinode-906000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:02:01.334816    8221 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:02:01.338835    8221 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:02:01.334875    8221 notify.go:220] Checking for updates...
	I0327 11:02:01.344780    8221 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:02:01.347807    8221 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:02:01.349362    8221 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:02:01.352798    8221 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:02:01.356019    8221 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:02:01.360664    8221 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 11:02:01.367792    8221 start.go:297] selected driver: qemu2
	I0327 11:02:01.367798    8221 start.go:901] validating driver "qemu2" against <nil>
	I0327 11:02:01.367803    8221 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:02:01.370102    8221 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 11:02:01.372792    8221 out.go:177] * Automatically selected the socket_vmnet network
	I0327 11:02:01.375918    8221 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:02:01.375972    8221 cni.go:84] Creating CNI manager for ""
	I0327 11:02:01.375978    8221 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0327 11:02:01.375983    8221 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 11:02:01.376017    8221 start.go:340] cluster config:
	{Name:multinode-906000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-906000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:02:01.380442    8221 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:02:01.388716    8221 out.go:177] * Starting "multinode-906000" primary control-plane node in "multinode-906000" cluster
	I0327 11:02:01.392784    8221 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 11:02:01.392805    8221 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 11:02:01.392818    8221 cache.go:56] Caching tarball of preloaded images
	I0327 11:02:01.392889    8221 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:02:01.392896    8221 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 11:02:01.393132    8221 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/multinode-906000/config.json ...
	I0327 11:02:01.393144    8221 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/multinode-906000/config.json: {Name:mk2f1ed4810d4df774e85e720b5d7c5b2a50a788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:02:01.393372    8221 start.go:360] acquireMachinesLock for multinode-906000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:02:01.393406    8221 start.go:364] duration metric: took 27.5µs to acquireMachinesLock for "multinode-906000"
	I0327 11:02:01.393419    8221 start.go:93] Provisioning new machine with config: &{Name:multinode-906000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-906000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:02:01.393459    8221 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:02:01.397808    8221 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 11:02:01.415085    8221 start.go:159] libmachine.API.Create for "multinode-906000" (driver="qemu2")
	I0327 11:02:01.415105    8221 client.go:168] LocalClient.Create starting
	I0327 11:02:01.415166    8221 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:02:01.415195    8221 main.go:141] libmachine: Decoding PEM data...
	I0327 11:02:01.415204    8221 main.go:141] libmachine: Parsing certificate...
	I0327 11:02:01.415251    8221 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:02:01.415273    8221 main.go:141] libmachine: Decoding PEM data...
	I0327 11:02:01.415282    8221 main.go:141] libmachine: Parsing certificate...
	I0327 11:02:01.415623    8221 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:02:01.559193    8221 main.go:141] libmachine: Creating SSH key...
	I0327 11:02:01.796469    8221 main.go:141] libmachine: Creating Disk image...
	I0327 11:02:01.796478    8221 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:02:01.796675    8221 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/disk.qcow2
	I0327 11:02:01.809530    8221 main.go:141] libmachine: STDOUT: 
	I0327 11:02:01.809555    8221 main.go:141] libmachine: STDERR: 
	I0327 11:02:01.809610    8221 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/disk.qcow2 +20000M
	I0327 11:02:01.820462    8221 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:02:01.820490    8221 main.go:141] libmachine: STDERR: 
	I0327 11:02:01.820510    8221 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/disk.qcow2
	I0327 11:02:01.820514    8221 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:02:01.820546    8221 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:9b:c7:ba:d9:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/disk.qcow2
	I0327 11:02:01.822300    8221 main.go:141] libmachine: STDOUT: 
	I0327 11:02:01.822316    8221 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:02:01.822336    8221 client.go:171] duration metric: took 407.229042ms to LocalClient.Create
	I0327 11:02:03.823223    8221 start.go:128] duration metric: took 2.42973475s to createHost
	I0327 11:02:03.823314    8221 start.go:83] releasing machines lock for "multinode-906000", held for 2.429914208s
	W0327 11:02:03.823371    8221 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:02:03.834659    8221 out.go:177] * Deleting "multinode-906000" in qemu2 ...
	W0327 11:02:03.872087    8221 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:02:03.872119    8221 start.go:728] Will try again in 5 seconds ...
	I0327 11:02:08.873367    8221 start.go:360] acquireMachinesLock for multinode-906000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:02:08.873818    8221 start.go:364] duration metric: took 337.541µs to acquireMachinesLock for "multinode-906000"
	I0327 11:02:08.873976    8221 start.go:93] Provisioning new machine with config: &{Name:multinode-906000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-906000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:02:08.874251    8221 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:02:08.885981    8221 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 11:02:08.935958    8221 start.go:159] libmachine.API.Create for "multinode-906000" (driver="qemu2")
	I0327 11:02:08.936016    8221 client.go:168] LocalClient.Create starting
	I0327 11:02:08.936134    8221 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:02:08.936205    8221 main.go:141] libmachine: Decoding PEM data...
	I0327 11:02:08.936218    8221 main.go:141] libmachine: Parsing certificate...
	I0327 11:02:08.936281    8221 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:02:08.936322    8221 main.go:141] libmachine: Decoding PEM data...
	I0327 11:02:08.936332    8221 main.go:141] libmachine: Parsing certificate...
	I0327 11:02:08.936871    8221 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:02:09.090917    8221 main.go:141] libmachine: Creating SSH key...
	I0327 11:02:09.158586    8221 main.go:141] libmachine: Creating Disk image...
	I0327 11:02:09.158595    8221 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:02:09.158768    8221 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/disk.qcow2
	I0327 11:02:09.171136    8221 main.go:141] libmachine: STDOUT: 
	I0327 11:02:09.171157    8221 main.go:141] libmachine: STDERR: 
	I0327 11:02:09.171204    8221 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/disk.qcow2 +20000M
	I0327 11:02:09.181996    8221 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:02:09.182016    8221 main.go:141] libmachine: STDERR: 
	I0327 11:02:09.182037    8221 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/disk.qcow2
	I0327 11:02:09.182044    8221 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:02:09.182079    8221 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:c0:e7:4b:81:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/disk.qcow2
	I0327 11:02:09.183917    8221 main.go:141] libmachine: STDOUT: 
	I0327 11:02:09.183933    8221 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:02:09.183942    8221 client.go:171] duration metric: took 247.918625ms to LocalClient.Create
	I0327 11:02:11.186099    8221 start.go:128] duration metric: took 2.311834292s to createHost
	I0327 11:02:11.186179    8221 start.go:83] releasing machines lock for "multinode-906000", held for 2.312348208s
	W0327 11:02:11.186532    8221 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-906000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-906000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:02:11.202258    8221 out.go:177] 
	W0327 11:02:11.207313    8221 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:02:11.207346    8221 out.go:239] * 
	* 
	W0327 11:02:11.209883    8221 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:02:11.218086    8221 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-906000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000: exit status 7 (69.70475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.03s)
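
The --alsologtostderr trace narrows the failure: qemu-img convert and resize both succeed, and the error surfaces only when libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must reach the daemon before handing the VM its network file descriptor (the -netdev socket,id=net0,fd=3 argument). The networking layer can be exercised without QEMU by wrapping a trivial command the same way; in this sketch, /usr/bin/true as the wrapped command is an illustrative assumption:

// wrapcheck.go — isolate the socket_vmnet layer from QEMU by wrapping a
// no-op command with the same client/socket pair the logs show libmachine using.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command(
		"/opt/socket_vmnet/bin/socket_vmnet_client", // client path from the logs
		"/var/run/socket_vmnet",                     // socket path from the logs
		"/usr/bin/true",                             // stand-in for qemu-system-aarch64
	)
	out, err := cmd.CombinedOutput()
	if err != nil {
		// Expected on this host:
		//   ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
		fmt.Printf("%s(exit: %v)\n", out, err)
		return
	}
	fmt.Println("client connected; any remaining failure is further down the stack")
}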

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (118.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-906000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-906000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (60.462667ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-906000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-906000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-906000 -- rollout status deployment/busybox: exit status 1 (58.897792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-906000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.3655ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-906000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.754666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-906000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.129292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-906000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.083834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-906000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.211667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-906000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.316458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-906000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.452625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-906000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.779417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-906000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.338458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-906000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.691542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-906000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.640792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-906000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.326541ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-906000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-906000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-906000 -- exec  -- nslookup kubernetes.io: exit status 1 (59.652834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-906000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-906000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-906000 -- exec  -- nslookup kubernetes.default: exit status 1 (58.666541ms)

** stderr ** 
	error: no server found for cluster "multinode-906000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-906000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-906000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (59.108292ms)

** stderr ** 
	error: no server found for cluster "multinode-906000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000: exit status 7 (32.053ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (118.81s)
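Every retry above fails identically: once the kubeconfig has no server entry for the profile, re-running kubectl cannot succeed until the VM is back up. The pattern the test follows (re-run, treat the failure as possibly temporary, give up at a deadline) looks roughly like the sketch below; the command line is taken from the log, while the 2-minute budget and 5-second interval are illustrative assumptions, not the test's actual constants.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed budget, not the test's constant
	for {
		// Same invocation as the log above, minus shell quoting.
		out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", "multinode-906000",
			"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").CombinedOutput()
		if err == nil {
			fmt.Printf("pod IPs: %s\n", out)
			return
		}
		if time.Now().After(deadline) {
			fmt.Printf("giving up: %v\n%s", err, out)
			return
		}
		time.Sleep(5 * time.Second) // "may be temporary", as the test log puts it
	}
}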

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-906000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.427791ms)

** stderr ** 
	error: no server found for cluster "multinode-906000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000: exit status 7 (32.345583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-906000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-906000 -v 3 --alsologtostderr: exit status 83 (44.548042ms)

-- stdout --
	* The control-plane node multinode-906000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-906000"

-- /stdout --
** stderr ** 
	I0327 11:04:10.237720    8388 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:04:10.237892    8388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:10.237895    8388 out.go:304] Setting ErrFile to fd 2...
	I0327 11:04:10.237897    8388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:10.238037    8388 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:04:10.238293    8388 mustload.go:65] Loading cluster: multinode-906000
	I0327 11:04:10.238483    8388 config.go:182] Loaded profile config "multinode-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:04:10.242938    8388 out.go:177] * The control-plane node multinode-906000 host is not running: state=Stopped
	I0327 11:04:10.246909    8388 out.go:177]   To start a cluster, run: "minikube start -p multinode-906000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-906000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000: exit status 7 (31.868542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-906000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-906000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.84525ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-906000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-906000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-906000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000: exit status 7 (32.656209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.11s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-906000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-906000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-906000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"multinode-906000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000: exit status 7 (32.16275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.11s)
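The assertion above reduces to counting Config.Nodes in the `profile list --output json` payload: the stopped profile reports a single control-plane node where the test expects 3. A self-contained sketch of that check, with the types trimmed to the handful of fields visible in the JSON above (the real minikube config struct is far larger):

package main

import (
	"encoding/json"
	"fmt"
)

// Only the fields needed for the node count.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				Name         string `json:"Name"`
				ControlPlane bool   `json:"ControlPlane"`
				Worker       bool   `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Trimmed from the `profile list --output json` payload shown above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-906000","Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The test expected 3 nodes here; the stopped profile reports only 1.
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
	}
}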

TestMultiNode/serial/CopyFile (0.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-906000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-906000 status --output json --alsologtostderr: exit status 7 (32.127375ms)

-- stdout --
	{"Name":"multinode-906000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0327 11:04:10.480624    8401 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:04:10.480768    8401 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:10.480772    8401 out.go:304] Setting ErrFile to fd 2...
	I0327 11:04:10.480774    8401 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:10.480902    8401 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:04:10.481021    8401 out.go:298] Setting JSON to true
	I0327 11:04:10.481033    8401 mustload.go:65] Loading cluster: multinode-906000
	I0327 11:04:10.481100    8401 notify.go:220] Checking for updates...
	I0327 11:04:10.481243    8401 config.go:182] Loaded profile config "multinode-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:04:10.481249    8401 status.go:255] checking status of multinode-906000 ...
	I0327 11:04:10.481450    8401 status.go:330] multinode-906000 host status = "Stopped" (err=<nil>)
	I0327 11:04:10.481453    8401 status.go:343] host is not running, skipping remaining checks
	I0327 11:04:10.481455    8401 status.go:257] multinode-906000 status: &{Name:multinode-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-906000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000: exit status 7 (32.375667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)
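The unmarshal error at multinode_test.go:191 is structural rather than a data problem: `status --output json` printed a single JSON object for the one remaining node, while the test decodes into a slice ([]cmd.Status). A minimal reproduction, using a stand-in Status type with the same fields as the JSON above, plus a tolerant fallback to the single-object form:

package main

import (
	"encoding/json"
	"fmt"
)

// Stand-in for minikube's cmd.Status; fields match the JSON printed above.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := []byte(`{"Name":"multinode-906000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

	// The failing shape: a single object decoded into a slice.
	var many []Status
	fmt.Println(json.Unmarshal(raw, &many)) // json: cannot unmarshal object into Go value of type []main.Status

	// A tolerant reader can fall back to the single-object form.
	var one Status
	if err := json.Unmarshal(raw, &one); err == nil {
		many = []Status{one}
	}
	fmt.Printf("%+v\n", many)
}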

TestMultiNode/serial/StopNode (0.15s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-906000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-906000 node stop m03: exit status 85 (48.328875ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-906000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-906000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-906000 status: exit status 7 (32.325709ms)

-- stdout --
	multinode-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-906000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-906000 status --alsologtostderr: exit status 7 (32.828917ms)

-- stdout --
	multinode-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 11:04:10.627170    8409 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:04:10.627323    8409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:10.627326    8409 out.go:304] Setting ErrFile to fd 2...
	I0327 11:04:10.627328    8409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:10.627456    8409 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:04:10.627584    8409 out.go:298] Setting JSON to false
	I0327 11:04:10.627595    8409 mustload.go:65] Loading cluster: multinode-906000
	I0327 11:04:10.627641    8409 notify.go:220] Checking for updates...
	I0327 11:04:10.627833    8409 config.go:182] Loaded profile config "multinode-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:04:10.627841    8409 status.go:255] checking status of multinode-906000 ...
	I0327 11:04:10.628067    8409 status.go:330] multinode-906000 host status = "Stopped" (err=<nil>)
	I0327 11:04:10.628071    8409 status.go:343] host is not running, skipping remaining checks
	I0327 11:04:10.628073    8409 status.go:257] multinode-906000 status: &{Name:multinode-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-906000 status --alsologtostderr": multinode-906000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000: exit status 7 (32.429458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)
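Exit status 85 (GUEST_NODE_RETRIEVE) simply reports that the profile has no node named m03, which follows directly from the AddNode failure earlier in the run. A hedged sketch of a pre-check using `minikube node list`; the "multinode-906000-m03" name assumes minikube's usual <profile>-m<NN> naming convention and is not taken from this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "node", "list", "-p", "multinode-906000").Output()
	if err != nil {
		fmt.Println("node list failed:", err)
		return
	}
	// Assumed node name, following minikube's <profile>-m<NN> convention.
	if !strings.Contains(string(out), "multinode-906000-m03") {
		fmt.Println("no m03 in this profile; `node stop m03` would exit 85")
		return
	}
	if err := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-906000", "node", "stop", "m03").Run(); err != nil {
		fmt.Println("node stop failed:", err)
	}
}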

TestMultiNode/serial/StartAfterStop (57.91s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-906000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-906000 node start m03 -v=7 --alsologtostderr: exit status 85 (50.900125ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0327 11:04:10.692093    8413 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:04:10.692508    8413 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:10.692512    8413 out.go:304] Setting ErrFile to fd 2...
	I0327 11:04:10.692514    8413 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:10.692656    8413 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:04:10.692892    8413 mustload.go:65] Loading cluster: multinode-906000
	I0327 11:04:10.693070    8413 config.go:182] Loaded profile config "multinode-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:04:10.697743    8413 out.go:177] 
	W0327 11:04:10.701756    8413 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0327 11:04:10.701761    8413 out.go:239] * 
	* 
	W0327 11:04:10.703714    8413 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:04:10.708744    8413 out.go:177] 

** /stderr **
multinode_test.go:284: I0327 11:04:10.692093    8413 out.go:291] Setting OutFile to fd 1 ...
I0327 11:04:10.692508    8413 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 11:04:10.692512    8413 out.go:304] Setting ErrFile to fd 2...
I0327 11:04:10.692514    8413 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 11:04:10.692656    8413 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
I0327 11:04:10.692892    8413 mustload.go:65] Loading cluster: multinode-906000
I0327 11:04:10.693070    8413 config.go:182] Loaded profile config "multinode-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 11:04:10.697743    8413 out.go:177] 
W0327 11:04:10.701756    8413 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0327 11:04:10.701761    8413 out.go:239] * 
* 
W0327 11:04:10.703714    8413 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0327 11:04:10.708744    8413 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-906000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-906000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-906000 status -v=7 --alsologtostderr: exit status 7 (32.469042ms)

-- stdout --
	multinode-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 11:04:10.743562    8415 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:04:10.743706    8415 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:10.743710    8415 out.go:304] Setting ErrFile to fd 2...
	I0327 11:04:10.743712    8415 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:10.743832    8415 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:04:10.743955    8415 out.go:298] Setting JSON to false
	I0327 11:04:10.743969    8415 mustload.go:65] Loading cluster: multinode-906000
	I0327 11:04:10.744032    8415 notify.go:220] Checking for updates...
	I0327 11:04:10.744175    8415 config.go:182] Loaded profile config "multinode-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:04:10.744180    8415 status.go:255] checking status of multinode-906000 ...
	I0327 11:04:10.744398    8415 status.go:330] multinode-906000 host status = "Stopped" (err=<nil>)
	I0327 11:04:10.744402    8415 status.go:343] host is not running, skipping remaining checks
	I0327 11:04:10.744404    8415 status.go:257] multinode-906000 status: &{Name:multinode-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-906000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-906000 status -v=7 --alsologtostderr: exit status 7 (79.4065ms)

-- stdout --
	multinode-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 11:04:12.024720    8417 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:04:12.024902    8417 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:12.024907    8417 out.go:304] Setting ErrFile to fd 2...
	I0327 11:04:12.024910    8417 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:12.025061    8417 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:04:12.025243    8417 out.go:298] Setting JSON to false
	I0327 11:04:12.025261    8417 mustload.go:65] Loading cluster: multinode-906000
	I0327 11:04:12.025299    8417 notify.go:220] Checking for updates...
	I0327 11:04:12.025509    8417 config.go:182] Loaded profile config "multinode-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:04:12.025517    8417 status.go:255] checking status of multinode-906000 ...
	I0327 11:04:12.025809    8417 status.go:330] multinode-906000 host status = "Stopped" (err=<nil>)
	I0327 11:04:12.025814    8417 status.go:343] host is not running, skipping remaining checks
	I0327 11:04:12.025817    8417 status.go:257] multinode-906000 status: &{Name:multinode-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-906000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-906000 status -v=7 --alsologtostderr: exit status 7 (75.886875ms)

-- stdout --
	multinode-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 11:04:13.673821    8421 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:04:13.673994    8421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:13.673998    8421 out.go:304] Setting ErrFile to fd 2...
	I0327 11:04:13.674001    8421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:13.674162    8421 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:04:13.674305    8421 out.go:298] Setting JSON to false
	I0327 11:04:13.674323    8421 mustload.go:65] Loading cluster: multinode-906000
	I0327 11:04:13.674355    8421 notify.go:220] Checking for updates...
	I0327 11:04:13.674569    8421 config.go:182] Loaded profile config "multinode-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:04:13.674576    8421 status.go:255] checking status of multinode-906000 ...
	I0327 11:04:13.674876    8421 status.go:330] multinode-906000 host status = "Stopped" (err=<nil>)
	I0327 11:04:13.674881    8421 status.go:343] host is not running, skipping remaining checks
	I0327 11:04:13.674884    8421 status.go:257] multinode-906000 status: &{Name:multinode-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-906000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-906000 status -v=7 --alsologtostderr: exit status 7 (75.374334ms)

-- stdout --
	multinode-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 11:04:15.967612    8427 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:04:15.967783    8427 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:15.967787    8427 out.go:304] Setting ErrFile to fd 2...
	I0327 11:04:15.967790    8427 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:15.967946    8427 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:04:15.968100    8427 out.go:298] Setting JSON to false
	I0327 11:04:15.968116    8427 mustload.go:65] Loading cluster: multinode-906000
	I0327 11:04:15.968146    8427 notify.go:220] Checking for updates...
	I0327 11:04:15.968355    8427 config.go:182] Loaded profile config "multinode-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:04:15.968362    8427 status.go:255] checking status of multinode-906000 ...
	I0327 11:04:15.968608    8427 status.go:330] multinode-906000 host status = "Stopped" (err=<nil>)
	I0327 11:04:15.968612    8427 status.go:343] host is not running, skipping remaining checks
	I0327 11:04:15.968615    8427 status.go:257] multinode-906000 status: &{Name:multinode-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-906000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-906000 status -v=7 --alsologtostderr: exit status 7 (75.586792ms)

-- stdout --
	multinode-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 11:04:19.253004    8429 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:04:19.253153    8429 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:19.253157    8429 out.go:304] Setting ErrFile to fd 2...
	I0327 11:04:19.253160    8429 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:19.253307    8429 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:04:19.253488    8429 out.go:298] Setting JSON to false
	I0327 11:04:19.253502    8429 mustload.go:65] Loading cluster: multinode-906000
	I0327 11:04:19.253535    8429 notify.go:220] Checking for updates...
	I0327 11:04:19.253721    8429 config.go:182] Loaded profile config "multinode-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:04:19.253727    8429 status.go:255] checking status of multinode-906000 ...
	I0327 11:04:19.253988    8429 status.go:330] multinode-906000 host status = "Stopped" (err=<nil>)
	I0327 11:04:19.253993    8429 status.go:343] host is not running, skipping remaining checks
	I0327 11:04:19.253995    8429 status.go:257] multinode-906000 status: &{Name:multinode-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-906000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-906000 status -v=7 --alsologtostderr: exit status 7 (78.625834ms)

-- stdout --
	multinode-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 11:04:22.835526    8431 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:04:22.835716    8431 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:22.835721    8431 out.go:304] Setting ErrFile to fd 2...
	I0327 11:04:22.835724    8431 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:22.835905    8431 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:04:22.836064    8431 out.go:298] Setting JSON to false
	I0327 11:04:22.836081    8431 mustload.go:65] Loading cluster: multinode-906000
	I0327 11:04:22.836106    8431 notify.go:220] Checking for updates...
	I0327 11:04:22.836316    8431 config.go:182] Loaded profile config "multinode-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:04:22.836323    8431 status.go:255] checking status of multinode-906000 ...
	I0327 11:04:22.836591    8431 status.go:330] multinode-906000 host status = "Stopped" (err=<nil>)
	I0327 11:04:22.836596    8431 status.go:343] host is not running, skipping remaining checks
	I0327 11:04:22.836599    8431 status.go:257] multinode-906000 status: &{Name:multinode-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-906000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-906000 status -v=7 --alsologtostderr: exit status 7 (78.884709ms)

-- stdout --
	multinode-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 11:04:29.489781    8439 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:04:29.489997    8439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:29.490001    8439 out.go:304] Setting ErrFile to fd 2...
	I0327 11:04:29.490004    8439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:29.490181    8439 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:04:29.490319    8439 out.go:298] Setting JSON to false
	I0327 11:04:29.490344    8439 mustload.go:65] Loading cluster: multinode-906000
	I0327 11:04:29.490381    8439 notify.go:220] Checking for updates...
	I0327 11:04:29.490607    8439 config.go:182] Loaded profile config "multinode-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:04:29.490614    8439 status.go:255] checking status of multinode-906000 ...
	I0327 11:04:29.490903    8439 status.go:330] multinode-906000 host status = "Stopped" (err=<nil>)
	I0327 11:04:29.490908    8439 status.go:343] host is not running, skipping remaining checks
	I0327 11:04:29.490910    8439 status.go:257] multinode-906000 status: &{Name:multinode-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-906000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-906000 status -v=7 --alsologtostderr: exit status 7 (76.612791ms)

-- stdout --
	multinode-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 11:04:46.638282    8448 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:04:46.638469    8448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:46.638473    8448 out.go:304] Setting ErrFile to fd 2...
	I0327 11:04:46.638476    8448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:04:46.638649    8448 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:04:46.638824    8448 out.go:298] Setting JSON to false
	I0327 11:04:46.638839    8448 mustload.go:65] Loading cluster: multinode-906000
	I0327 11:04:46.638884    8448 notify.go:220] Checking for updates...
	I0327 11:04:46.639091    8448 config.go:182] Loaded profile config "multinode-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:04:46.639099    8448 status.go:255] checking status of multinode-906000 ...
	I0327 11:04:46.639372    8448 status.go:330] multinode-906000 host status = "Stopped" (err=<nil>)
	I0327 11:04:46.639377    8448 status.go:343] host is not running, skipping remaining checks
	I0327 11:04:46.639380    8448 status.go:257] multinode-906000 status: &{Name:multinode-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-906000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-906000 status -v=7 --alsologtostderr: exit status 7 (75.359833ms)

-- stdout --
	multinode-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
** stderr ** 
	I0327 11:05:08.536505    8473 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:05:08.536710    8473 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:05:08.536714    8473 out.go:304] Setting ErrFile to fd 2...
	I0327 11:05:08.536717    8473 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:05:08.536896    8473 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:05:08.537062    8473 out.go:298] Setting JSON to false
	I0327 11:05:08.537078    8473 mustload.go:65] Loading cluster: multinode-906000
	I0327 11:05:08.537104    8473 notify.go:220] Checking for updates...
	I0327 11:05:08.537350    8473 config.go:182] Loaded profile config "multinode-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:05:08.537358    8473 status.go:255] checking status of multinode-906000 ...
	I0327 11:05:08.537617    8473 status.go:330] multinode-906000 host status = "Stopped" (err=<nil>)
	I0327 11:05:08.537622    8473 status.go:343] host is not running, skipping remaining checks
	I0327 11:05:08.537625    8473 status.go:257] multinode-906000 status: &{Name:multinode-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-906000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000: exit status 7 (34.333959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (57.91s)
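The probe timestamps above (11:04:10, :12, :13, :15, :19, :22, :29, :46, then 11:05:08) show the test polling status with growing intervals for roughly a minute before failing. A minimal sketch of that exponential-backoff polling; the one-minute budget and strict doubling are illustrative assumptions rather than the test's exact schedule:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	delay := time.Second
	deadline := time.Now().Add(time.Minute) // assumed overall budget
	for time.Now().Before(deadline) {
		// `minikube status` exits 0 only once the host is Running; it exited 7 above.
		err := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-906000",
			"status", "-v=7", "--alsologtostderr").Run()
		if err == nil {
			fmt.Println("host is Running")
			return
		}
		time.Sleep(delay)
		delay *= 2 // 1s, 2s, 4s, ... roughly the spacing seen in the log
	}
	fmt.Println("gave up: host stayed Stopped")
}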

TestMultiNode/serial/RestartKeepsNodes (9.36s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-906000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-906000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-906000: (3.998204666s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-906000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-906000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.224125042s)

-- stdout --
	* [multinode-906000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-906000" primary control-plane node in "multinode-906000" cluster
	* Restarting existing qemu2 VM for "multinode-906000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-906000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:05:12.668430    8504 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:05:12.668574    8504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:05:12.668578    8504 out.go:304] Setting ErrFile to fd 2...
	I0327 11:05:12.668581    8504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:05:12.668748    8504 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:05:12.669955    8504 out.go:298] Setting JSON to false
	I0327 11:05:12.689757    8504 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5685,"bootTime":1711557027,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:05:12.689820    8504 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:05:12.694191    8504 out.go:177] * [multinode-906000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:05:12.702944    8504 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:05:12.702989    8504 notify.go:220] Checking for updates...
	I0327 11:05:12.706953    8504 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:05:12.709933    8504 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:05:12.712934    8504 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:05:12.716946    8504 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:05:12.719906    8504 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:05:12.723253    8504 config.go:182] Loaded profile config "multinode-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:05:12.723311    8504 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:05:12.727965    8504 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 11:05:12.735032    8504 start.go:297] selected driver: qemu2
	I0327 11:05:12.735039    8504 start.go:901] validating driver "qemu2" against &{Name:multinode-906000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-906000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:05:12.735122    8504 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:05:12.737557    8504 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:05:12.737605    8504 cni.go:84] Creating CNI manager for ""
	I0327 11:05:12.737611    8504 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0327 11:05:12.737654    8504 start.go:340] cluster config:
	{Name:multinode-906000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-906000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwar
ePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:05:12.742227    8504 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:05:12.748916    8504 out.go:177] * Starting "multinode-906000" primary control-plane node in "multinode-906000" cluster
	I0327 11:05:12.752889    8504 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 11:05:12.752909    8504 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 11:05:12.752918    8504 cache.go:56] Caching tarball of preloaded images
	I0327 11:05:12.752982    8504 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:05:12.752988    8504 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 11:05:12.753061    8504 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/multinode-906000/config.json ...
	I0327 11:05:12.753553    8504 start.go:360] acquireMachinesLock for multinode-906000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:05:12.753595    8504 start.go:364] duration metric: took 33.084µs to acquireMachinesLock for "multinode-906000"
	I0327 11:05:12.753606    8504 start.go:96] Skipping create...Using existing machine configuration
	I0327 11:05:12.753615    8504 fix.go:54] fixHost starting: 
	I0327 11:05:12.753746    8504 fix.go:112] recreateIfNeeded on multinode-906000: state=Stopped err=<nil>
	W0327 11:05:12.753755    8504 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 11:05:12.761917    8504 out.go:177] * Restarting existing qemu2 VM for "multinode-906000" ...
	I0327 11:05:12.764860    8504 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:c0:e7:4b:81:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/disk.qcow2
	I0327 11:05:12.767009    8504 main.go:141] libmachine: STDOUT: 
	I0327 11:05:12.767035    8504 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:05:12.767066    8504 fix.go:56] duration metric: took 13.45125ms for fixHost
	I0327 11:05:12.767072    8504 start.go:83] releasing machines lock for "multinode-906000", held for 13.472167ms
	W0327 11:05:12.767080    8504 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:05:12.767117    8504 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:05:12.767122    8504 start.go:728] Will try again in 5 seconds ...
	I0327 11:05:17.768806    8504 start.go:360] acquireMachinesLock for multinode-906000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:05:17.769157    8504 start.go:364] duration metric: took 269.417µs to acquireMachinesLock for "multinode-906000"
	I0327 11:05:17.769295    8504 start.go:96] Skipping create...Using existing machine configuration
	I0327 11:05:17.769317    8504 fix.go:54] fixHost starting: 
	I0327 11:05:17.770008    8504 fix.go:112] recreateIfNeeded on multinode-906000: state=Stopped err=<nil>
	W0327 11:05:17.770034    8504 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 11:05:17.775468    8504 out.go:177] * Restarting existing qemu2 VM for "multinode-906000" ...
	I0327 11:05:17.780611    8504 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:c0:e7:4b:81:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/disk.qcow2
	I0327 11:05:17.790539    8504 main.go:141] libmachine: STDOUT: 
	I0327 11:05:17.790626    8504 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:05:17.790711    8504 fix.go:56] duration metric: took 21.400625ms for fixHost
	I0327 11:05:17.790734    8504 start.go:83] releasing machines lock for "multinode-906000", held for 21.551958ms
	W0327 11:05:17.790939    8504 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-906000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-906000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:05:17.798417    8504 out.go:177] 
	W0327 11:05:17.802623    8504 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:05:17.802669    8504 out.go:239] * 
	* 
	W0327 11:05:17.805365    8504 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:05:17.815426    8504 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-906000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-906000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000: exit status 7 (34.741875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.36s)
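
Every restart attempt above dies at the same step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client exits because nothing is accepting connections on /var/run/socket_vmnet. A quick sanity check on the build host, using only the paths already shown in this log, might look like the following (a diagnostic sketch, not part of the test suite):

	# is the socket_vmnet daemon alive, and does its socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet

	# can the client reach it? (same binary and socket path the log invokes;
	# `true` is a stand-in for the qemu command line)
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the client prints the same 'Failed to connect to "/var/run/socket_vmnet": Connection refused', the daemon is down and every qemu2 start in this suite will keep failing identically.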

                                                
                                    
TestMultiNode/serial/DeleteNode (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-906000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-906000 node delete m03: exit status 83 (41.132083ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-906000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-906000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-906000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-906000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-906000 status --alsologtostderr: exit status 7 (31.926084ms)

                                                
                                                
-- stdout --
	multinode-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 11:05:18.003174    8524 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:05:18.003325    8524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:05:18.003328    8524 out.go:304] Setting ErrFile to fd 2...
	I0327 11:05:18.003330    8524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:05:18.003459    8524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:05:18.003589    8524 out.go:298] Setting JSON to false
	I0327 11:05:18.003607    8524 mustload.go:65] Loading cluster: multinode-906000
	I0327 11:05:18.003652    8524 notify.go:220] Checking for updates...
	I0327 11:05:18.003822    8524 config.go:182] Loaded profile config "multinode-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:05:18.003832    8524 status.go:255] checking status of multinode-906000 ...
	I0327 11:05:18.004035    8524 status.go:330] multinode-906000 host status = "Stopped" (err=<nil>)
	I0327 11:05:18.004039    8524 status.go:343] host is not running, skipping remaining checks
	I0327 11:05:18.004041    8524 status.go:257] multinode-906000 status: &{Name:multinode-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-906000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000: exit status 7 (31.852708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-906000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-906000 stop: (3.578129333s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-906000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-906000 status: exit status 7 (69.152709ms)

                                                
                                                
-- stdout --
	multinode-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-906000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-906000 status --alsologtostderr: exit status 7 (34.392333ms)

                                                
                                                
-- stdout --
	multinode-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 11:05:21.717395    8549 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:05:21.717552    8549 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:05:21.717555    8549 out.go:304] Setting ErrFile to fd 2...
	I0327 11:05:21.717558    8549 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:05:21.717693    8549 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:05:21.717823    8549 out.go:298] Setting JSON to false
	I0327 11:05:21.717841    8549 mustload.go:65] Loading cluster: multinode-906000
	I0327 11:05:21.717898    8549 notify.go:220] Checking for updates...
	I0327 11:05:21.718016    8549 config.go:182] Loaded profile config "multinode-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:05:21.718023    8549 status.go:255] checking status of multinode-906000 ...
	I0327 11:05:21.718216    8549 status.go:330] multinode-906000 host status = "Stopped" (err=<nil>)
	I0327 11:05:21.718220    8549 status.go:343] host is not running, skipping remaining checks
	I0327 11:05:21.718222    8549 status.go:257] multinode-906000 status: &{Name:multinode-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-906000 status --alsologtostderr": multinode-906000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-906000 status --alsologtostderr": multinode-906000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000: exit status 7 (32.161875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.71s)
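
The two "incorrect number of ..." messages above come from count-based assertions: the test tallies the "host: Stopped" and "kubelet: Stopped" stanzas in the status output and compares the tally against the number of nodes it expects the multinode cluster to have (more than the single profile present here). The tally can be reproduced from a shell (illustrative sketch, not the test's actual Go code):

	out/minikube-darwin-arm64 -p multinode-906000 status | grep -c "host: Stopped"
	# returns 1 in this run: only the stopped control-plane profile exists,
	# because the worker nodes were never provisioned after the socket_vmnet failures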

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-906000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-906000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.188574208s)

                                                
                                                
-- stdout --
	* [multinode-906000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-906000" primary control-plane node in "multinode-906000" cluster
	* Restarting existing qemu2 VM for "multinode-906000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-906000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 11:05:21.781400    8553 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:05:21.781522    8553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:05:21.781525    8553 out.go:304] Setting ErrFile to fd 2...
	I0327 11:05:21.781527    8553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:05:21.781673    8553 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:05:21.782677    8553 out.go:298] Setting JSON to false
	I0327 11:05:21.798921    8553 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5694,"bootTime":1711557027,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:05:21.798981    8553 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:05:21.803775    8553 out.go:177] * [multinode-906000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:05:21.812578    8553 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:05:21.816615    8553 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:05:21.812611    8553 notify.go:220] Checking for updates...
	I0327 11:05:21.822563    8553 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:05:21.825592    8553 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:05:21.827051    8553 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:05:21.830568    8553 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:05:21.833857    8553 config.go:182] Loaded profile config "multinode-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:05:21.834116    8553 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:05:21.838451    8553 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 11:05:21.845558    8553 start.go:297] selected driver: qemu2
	I0327 11:05:21.845569    8553 start.go:901] validating driver "qemu2" against &{Name:multinode-906000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multin
ode-906000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:05:21.845631    8553 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:05:21.847898    8553 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:05:21.847936    8553 cni.go:84] Creating CNI manager for ""
	I0327 11:05:21.847940    8553 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0327 11:05:21.847984    8553 start.go:340] cluster config:
	{Name:multinode-906000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-906000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwar
ePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:05:21.852301    8553 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:05:21.859567    8553 out.go:177] * Starting "multinode-906000" primary control-plane node in "multinode-906000" cluster
	I0327 11:05:21.863572    8553 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 11:05:21.863588    8553 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 11:05:21.863596    8553 cache.go:56] Caching tarball of preloaded images
	I0327 11:05:21.863635    8553 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:05:21.863641    8553 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 11:05:21.863693    8553 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/multinode-906000/config.json ...
	I0327 11:05:21.864160    8553 start.go:360] acquireMachinesLock for multinode-906000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:05:21.864185    8553 start.go:364] duration metric: took 19.667µs to acquireMachinesLock for "multinode-906000"
	I0327 11:05:21.864194    8553 start.go:96] Skipping create...Using existing machine configuration
	I0327 11:05:21.864198    8553 fix.go:54] fixHost starting: 
	I0327 11:05:21.864313    8553 fix.go:112] recreateIfNeeded on multinode-906000: state=Stopped err=<nil>
	W0327 11:05:21.864322    8553 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 11:05:21.872611    8553 out.go:177] * Restarting existing qemu2 VM for "multinode-906000" ...
	I0327 11:05:21.876582    8553 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:c0:e7:4b:81:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/disk.qcow2
	I0327 11:05:21.878621    8553 main.go:141] libmachine: STDOUT: 
	I0327 11:05:21.878644    8553 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:05:21.878675    8553 fix.go:56] duration metric: took 14.476ms for fixHost
	I0327 11:05:21.878679    8553 start.go:83] releasing machines lock for "multinode-906000", held for 14.49ms
	W0327 11:05:21.878687    8553 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:05:21.878714    8553 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:05:21.878719    8553 start.go:728] Will try again in 5 seconds ...
	I0327 11:05:26.878998    8553 start.go:360] acquireMachinesLock for multinode-906000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:05:26.879396    8553 start.go:364] duration metric: took 269.458µs to acquireMachinesLock for "multinode-906000"
	I0327 11:05:26.879522    8553 start.go:96] Skipping create...Using existing machine configuration
	I0327 11:05:26.879546    8553 fix.go:54] fixHost starting: 
	I0327 11:05:26.880273    8553 fix.go:112] recreateIfNeeded on multinode-906000: state=Stopped err=<nil>
	W0327 11:05:26.880297    8553 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 11:05:26.888668    8553 out.go:177] * Restarting existing qemu2 VM for "multinode-906000" ...
	I0327 11:05:26.892852    8553 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:c0:e7:4b:81:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/multinode-906000/disk.qcow2
	I0327 11:05:26.902606    8553 main.go:141] libmachine: STDOUT: 
	I0327 11:05:26.902696    8553 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:05:26.902781    8553 fix.go:56] duration metric: took 23.24025ms for fixHost
	I0327 11:05:26.902801    8553 start.go:83] releasing machines lock for "multinode-906000", held for 23.381542ms
	W0327 11:05:26.902996    8553 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-906000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-906000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:05:26.909650    8553 out.go:177] 
	W0327 11:05:26.913783    8553 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:05:26.913812    8553 out.go:239] * 
	* 
	W0327 11:05:26.916242    8553 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:05:26.925620    8553 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-906000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000: exit status 7 (68.44625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-906000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-906000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-906000-m01 --driver=qemu2 : exit status 80 (9.928725417s)

                                                
                                                
-- stdout --
	* [multinode-906000-m01] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-906000-m01" primary control-plane node in "multinode-906000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-906000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-906000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-906000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-906000-m02 --driver=qemu2 : exit status 80 (9.926397583s)

                                                
                                                
-- stdout --
	* [multinode-906000-m02] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-906000-m02" primary control-plane node in "multinode-906000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-906000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-906000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-906000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-906000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-906000: exit status 83 (82.23275ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-906000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-906000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-906000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-906000 -n multinode-906000: exit status 7 (32.706833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.11s)
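
All remaining failures in this run trace back to the same unavailable daemon, so the remediation is host-side rather than in minikube: bring socket_vmnet back up before rerunning. Two common routes, both hedged because this log does not show how the daemon was installed on the agent (the /opt/socket_vmnet paths suggest a from-source install):

	# Homebrew install: the route documented for minikube's qemu2 driver
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services start socket_vmnet

	# from-source install: run the daemon in the foreground on the expected socket
	# (the gateway address is the upstream example value, not taken from this log)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet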

                                                
                                    
TestPreload (10.03s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-414000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-414000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.845485917s)

                                                
                                                
-- stdout --
	* [test-preload-414000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-414000" primary control-plane node in "test-preload-414000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-414000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 11:05:47.291995    8633 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:05:47.292129    8633 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:05:47.292133    8633 out.go:304] Setting ErrFile to fd 2...
	I0327 11:05:47.292135    8633 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:05:47.292254    8633 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:05:47.293300    8633 out.go:298] Setting JSON to false
	I0327 11:05:47.309650    8633 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5720,"bootTime":1711557027,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:05:47.309716    8633 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:05:47.316411    8633 out.go:177] * [test-preload-414000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:05:47.323412    8633 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:05:47.327420    8633 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:05:47.323471    8633 notify.go:220] Checking for updates...
	I0327 11:05:47.334374    8633 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:05:47.338333    8633 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:05:47.341322    8633 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:05:47.344371    8633 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:05:47.347791    8633 config.go:182] Loaded profile config "multinode-906000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:05:47.347848    8633 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:05:47.351336    8633 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 11:05:47.358378    8633 start.go:297] selected driver: qemu2
	I0327 11:05:47.358384    8633 start.go:901] validating driver "qemu2" against <nil>
	I0327 11:05:47.358390    8633 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:05:47.360684    8633 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 11:05:47.365307    8633 out.go:177] * Automatically selected the socket_vmnet network
	I0327 11:05:47.368488    8633 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:05:47.368540    8633 cni.go:84] Creating CNI manager for ""
	I0327 11:05:47.368553    8633 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:05:47.368558    8633 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 11:05:47.368610    8633 start.go:340] cluster config:
	{Name:test-preload-414000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-414000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docke
r CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client
SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:05:47.373404    8633 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:05:47.381384    8633 out.go:177] * Starting "test-preload-414000" primary control-plane node in "test-preload-414000" cluster
	I0327 11:05:47.385347    8633 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0327 11:05:47.385423    8633 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/test-preload-414000/config.json ...
	I0327 11:05:47.385441    8633 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/test-preload-414000/config.json: {Name:mk6c49022dadfc0ddbf2b8889974fd8fdf9795b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:05:47.385457    8633 cache.go:107] acquiring lock: {Name:mk0381760554e7607b6127c65d3e951c454618f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:05:47.385463    8633 cache.go:107] acquiring lock: {Name:mkb083152eeb5730ef185bc4ea04f251800ce66f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:05:47.385479    8633 cache.go:107] acquiring lock: {Name:mk65571fc2e7d00b08584f452fa506da98e38320 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:05:47.385690    8633 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 11:05:47.385673    8633 cache.go:107] acquiring lock: {Name:mk00ad29766c855b04366e530155d5715d600ba5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:05:47.385716    8633 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0327 11:05:47.385722    8633 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0327 11:05:47.385746    8633 start.go:360] acquireMachinesLock for test-preload-414000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:05:47.385734    8633 cache.go:107] acquiring lock: {Name:mk792f071d08bc7918c0f8c4cf7d9a0aaf743396 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:05:47.385773    8633 cache.go:107] acquiring lock: {Name:mka2b43ad87a1f47e3220f3878ca8add4166e575 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:05:47.385790    8633 start.go:364] duration metric: took 35.209µs to acquireMachinesLock for "test-preload-414000"
	I0327 11:05:47.385788    8633 cache.go:107] acquiring lock: {Name:mke6a16954c5dcf2a65fb8819fde8f436dd338dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:05:47.385814    8633 start.go:93] Provisioning new machine with config: &{Name:test-preload-414000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:
test-preload-414000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:05:47.385894    8633 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:05:47.385754    8633 cache.go:107] acquiring lock: {Name:mk2207ff0deafe1ac4b96dbe0a1ca9b2877ace87 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:05:47.394197    8633 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 11:05:47.386087    8633 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0327 11:05:47.386089    8633 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0327 11:05:47.390697    8633 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0327 11:05:47.390697    8633 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0327 11:05:47.390732    8633 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 11:05:47.397346    8633 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 11:05:47.397354    8633 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0327 11:05:47.397374    8633 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0327 11:05:47.399787    8633 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0327 11:05:47.399897    8633 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0327 11:05:47.399907    8633 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0327 11:05:47.401020    8633 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 11:05:47.401059    8633 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0327 11:05:47.413078    8633 start.go:159] libmachine.API.Create for "test-preload-414000" (driver="qemu2")
	I0327 11:05:47.413099    8633 client.go:168] LocalClient.Create starting
	I0327 11:05:47.413187    8633 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:05:47.413217    8633 main.go:141] libmachine: Decoding PEM data...
	I0327 11:05:47.413229    8633 main.go:141] libmachine: Parsing certificate...
	I0327 11:05:47.413271    8633 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:05:47.413294    8633 main.go:141] libmachine: Decoding PEM data...
	I0327 11:05:47.413299    8633 main.go:141] libmachine: Parsing certificate...
	I0327 11:05:47.413692    8633 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:05:47.560555    8633 main.go:141] libmachine: Creating SSH key...
	I0327 11:05:47.605622    8633 main.go:141] libmachine: Creating Disk image...
	I0327 11:05:47.605637    8633 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:05:47.605818    8633 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/test-preload-414000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/test-preload-414000/disk.qcow2
	I0327 11:05:47.618826    8633 main.go:141] libmachine: STDOUT: 
	I0327 11:05:47.618970    8633 main.go:141] libmachine: STDERR: 
	I0327 11:05:47.619016    8633 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/test-preload-414000/disk.qcow2 +20000M
	I0327 11:05:47.631248    8633 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:05:47.631270    8633 main.go:141] libmachine: STDERR: 
	I0327 11:05:47.631282    8633 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/test-preload-414000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/test-preload-414000/disk.qcow2
	I0327 11:05:47.631285    8633 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:05:47.631311    8633 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/test-preload-414000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/test-preload-414000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/test-preload-414000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:ec:b8:f8:1b:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/test-preload-414000/disk.qcow2
	I0327 11:05:47.633774    8633 main.go:141] libmachine: STDOUT: 
	I0327 11:05:47.633791    8633 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:05:47.633816    8633 client.go:171] duration metric: took 220.71225ms to LocalClient.Create
	I0327 11:05:49.375740    8633 cache.go:162] opening:  /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0327 11:05:49.516422    8633 cache.go:162] opening:  /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0327 11:05:49.518235    8633 cache.go:162] opening:  /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0327 11:05:49.527967    8633 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0327 11:05:49.528054    8633 cache.go:162] opening:  /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0327 11:05:49.528414    8633 cache.go:162] opening:  /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0327 11:05:49.529946    8633 cache.go:162] opening:  /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0327 11:05:49.548458    8633 cache.go:162] opening:  /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0327 11:05:49.633991    8633 start.go:128] duration metric: took 2.248091042s to createHost
	I0327 11:05:49.634043    8633 start.go:83] releasing machines lock for "test-preload-414000", held for 2.248258667s
	W0327 11:05:49.634102    8633 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:05:49.650896    8633 out.go:177] * Deleting "test-preload-414000" in qemu2 ...
	I0327 11:05:49.658864    8633 cache.go:157] /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0327 11:05:49.658911    8633 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.273306917s
	I0327 11:05:49.658935    8633 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0327 11:05:49.682111    8633 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:05:49.682144    8633 start.go:728] Will try again in 5 seconds ...
	W0327 11:05:49.690925    8633 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0327 11:05:49.690993    8633 cache.go:162] opening:  /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0327 11:05:50.857247    8633 cache.go:157] /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0327 11:05:50.857296    8633 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.471556917s
	I0327 11:05:50.857320    8633 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0327 11:05:51.225685    8633 cache.go:157] /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0327 11:05:51.225749    8633 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.840088625s
	I0327 11:05:51.225781    8633 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0327 11:05:51.560918    8633 cache.go:157] /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0327 11:05:51.560962    8633 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 4.175530708s
	I0327 11:05:51.560985    8633 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0327 11:05:52.289975    8633 cache.go:157] /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0327 11:05:52.290023    8633 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.904602458s
	I0327 11:05:52.290067    8633 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0327 11:05:53.491202    8633 cache.go:157] /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0327 11:05:53.491256    8633 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 6.105841167s
	I0327 11:05:53.491290    8633 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0327 11:05:54.682516    8633 start.go:360] acquireMachinesLock for test-preload-414000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:05:54.682929    8633 start.go:364] duration metric: took 325.834µs to acquireMachinesLock for "test-preload-414000"
	I0327 11:05:54.683070    8633 start.go:93] Provisioning new machine with config: &{Name:test-preload-414000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:
test-preload-414000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:05:54.683290    8633 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:05:54.697050    8633 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 11:05:54.744879    8633 start.go:159] libmachine.API.Create for "test-preload-414000" (driver="qemu2")
	I0327 11:05:54.744932    8633 client.go:168] LocalClient.Create starting
	I0327 11:05:54.745036    8633 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:05:54.745097    8633 main.go:141] libmachine: Decoding PEM data...
	I0327 11:05:54.745119    8633 main.go:141] libmachine: Parsing certificate...
	I0327 11:05:54.745181    8633 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:05:54.745222    8633 main.go:141] libmachine: Decoding PEM data...
	I0327 11:05:54.745236    8633 main.go:141] libmachine: Parsing certificate...
	I0327 11:05:54.745796    8633 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:05:54.899783    8633 main.go:141] libmachine: Creating SSH key...
	I0327 11:05:55.032109    8633 main.go:141] libmachine: Creating Disk image...
	I0327 11:05:55.032116    8633 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:05:55.032297    8633 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/test-preload-414000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/test-preload-414000/disk.qcow2
	I0327 11:05:55.044996    8633 main.go:141] libmachine: STDOUT: 
	I0327 11:05:55.045098    8633 main.go:141] libmachine: STDERR: 
	I0327 11:05:55.045158    8633 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/test-preload-414000/disk.qcow2 +20000M
	I0327 11:05:55.056410    8633 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:05:55.056501    8633 main.go:141] libmachine: STDERR: 
	I0327 11:05:55.056520    8633 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/test-preload-414000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/test-preload-414000/disk.qcow2
	I0327 11:05:55.056525    8633 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:05:55.056564    8633 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/test-preload-414000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/test-preload-414000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/test-preload-414000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:f2:73:f5:68:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/test-preload-414000/disk.qcow2
	I0327 11:05:55.058426    8633 main.go:141] libmachine: STDOUT: 
	I0327 11:05:55.058503    8633 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:05:55.058516    8633 client.go:171] duration metric: took 313.581625ms to LocalClient.Create
	I0327 11:05:55.676807    8633 cache.go:157] /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0327 11:05:55.676880    8633 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 8.2911505s
	I0327 11:05:55.676916    8633 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0327 11:05:57.058888    8633 start.go:128] duration metric: took 2.375583542s to createHost
	I0327 11:05:57.058938    8633 start.go:83] releasing machines lock for "test-preload-414000", held for 2.375991708s
	W0327 11:05:57.059168    8633 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-414000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-414000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:05:57.068658    8633 out.go:177] 
	W0327 11:05:57.077773    8633 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:05:57.077824    8633 out.go:239] * 
	* 
	W0327 11:05:57.080163    8633 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:05:57.089639    8633 out.go:177] 

** /stderr **
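Note on the `image.go:265` warnings in the log above ("arch mismatch: want arm64 got amd64. fixing"): minikube found only amd64 variants of those images available locally, so the cache step fetches the arm64 variant from the registry instead. A rough by-hand version of that check, assuming a local Docker daemon is available (image name copied from the log):

	docker pull --platform linux/arm64 registry.k8s.io/coredns/coredns:v1.8.6
	docker image inspect --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/coredns/coredns:v1.8.6
	# prints linux/arm64 once the correct variant is present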
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-414000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-03-27 11:05:57.11007 -0700 PDT m=+872.917924584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-414000 -n test-preload-414000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-414000 -n test-preload-414000: exit status 7 (71.242667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-414000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-414000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-414000
--- FAIL: TestPreload (10.03s)
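Note: this failure, and every qemu2 start failure below, shares one root cause: the socket_vmnet client cannot reach the "/var/run/socket_vmnet" socket, so QEMU never receives its network file descriptor. A minimal triage sketch for the CI host, assuming socket_vmnet was installed via Homebrew as in minikube's qemu2 driver docs:

	# is anything providing the socket the log complains about?
	ls -l /var/run/socket_vmnet
	# is the Homebrew-managed launchd service alive?
	sudo launchctl list | grep -i socket_vmnet
	# if not, (re)start it
	sudo brew services restart socket_vmnet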

TestScheduledStopUnix (10s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-472000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-472000 --memory=2048 --driver=qemu2 : exit status 80 (9.816261791s)

-- stdout --
	* [scheduled-stop-472000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-472000" primary control-plane node in "scheduled-stop-472000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-472000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-472000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-472000" primary control-plane node in "scheduled-stop-472000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-472000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-03-27 11:06:07.106502 -0700 PDT m=+882.914421418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-472000 -n scheduled-stop-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-472000 -n scheduled-stop-472000: exit status 7 (73.092084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-472000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-472000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-472000
--- FAIL: TestScheduledStopUnix (10.00s)
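Note: the post-mortem's `exit status 7 (may be ok)` is informative rather than a second bug. Per `minikube status --help`, the exit code encodes host, cluster, and Kubernetes health as bits from right to left, so 7 = 1 (host not OK) + 2 (cluster not OK) + 4 (Kubernetes not OK), i.e. nothing was provisioned at all. To see the encoding by hand (profile name taken from this test):

	out/minikube-darwin-arm64 status -p scheduled-stop-472000 --format='{{.Host}}'
	echo "status bits: $?"    # 7 -> host, cluster and kubernetes all reported down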

TestSkaffold (16.87s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1849990480 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1849990480 version: (1.041354334s)
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-692000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-692000 --memory=2600 --driver=qemu2 : exit status 80 (9.868998583s)

-- stdout --
	* [skaffold-692000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-692000" primary control-plane node in "skaffold-692000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-692000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-692000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-692000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-692000" primary control-plane node in "skaffold-692000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-692000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-692000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-03-27 11:06:23.987015 -0700 PDT m=+899.795043334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-692000 -n skaffold-692000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-692000 -n skaffold-692000: exit status 7 (64.517791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-692000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-692000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-692000
--- FAIL: TestSkaffold (16.87s)
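Note: each of these runs auto-selected the socket_vmnet network ("* Automatically selected the socket_vmnet network"). While the socket_vmnet service is broken, the qemu2 driver can instead be pointed at QEMU's builtin user-mode networking; a hedged workaround sketch (flag per the qemu2 driver docs; the user network trades away host-to-VM features such as `minikube tunnel`):

	out/minikube-darwin-arm64 start -p skaffold-692000 --memory=2600 --driver=qemu2 --network=user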

TestRunningBinaryUpgrade (620.21s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1022881293 start -p running-upgrade-599000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1022881293 start -p running-upgrade-599000 --memory=2200 --vm-driver=qemu2 : (1m17.857457s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-599000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-599000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m24.74812925s)

-- stdout --
	* [running-upgrade-599000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-599000" primary control-plane node in "running-upgrade-599000" cluster
	* Updating the running qemu2 "running-upgrade-599000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0327 11:08:27.825329    9149 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:08:27.825476    9149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:08:27.825480    9149 out.go:304] Setting ErrFile to fd 2...
	I0327 11:08:27.825482    9149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:08:27.825618    9149 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:08:27.826604    9149 out.go:298] Setting JSON to false
	I0327 11:08:27.844096    9149 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5880,"bootTime":1711557027,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:08:27.844183    9149 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:08:27.849024    9149 out.go:177] * [running-upgrade-599000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:08:27.857002    9149 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:08:27.857052    9149 notify.go:220] Checking for updates...
	I0327 11:08:27.861978    9149 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:08:27.865948    9149 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:08:27.869875    9149 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:08:27.877989    9149 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:08:27.880980    9149 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:08:27.884166    9149 config.go:182] Loaded profile config "running-upgrade-599000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 11:08:27.887989    9149 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0327 11:08:27.890961    9149 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:08:27.894934    9149 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 11:08:27.901887    9149 start.go:297] selected driver: qemu2
	I0327 11:08:27.901892    9149 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-599000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51147 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgra
de-599000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 11:08:27.901934    9149 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:08:27.904531    9149 cni.go:84] Creating CNI manager for ""
	I0327 11:08:27.904549    9149 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:08:27.904583    9149 start.go:340] cluster config:
	{Name:running-upgrade-599000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51147 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-599000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 11:08:27.904636    9149 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:08:27.911811    9149 out.go:177] * Starting "running-upgrade-599000" primary control-plane node in "running-upgrade-599000" cluster
	I0327 11:08:27.915949    9149 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0327 11:08:27.915965    9149 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0327 11:08:27.915969    9149 cache.go:56] Caching tarball of preloaded images
	I0327 11:08:27.916025    9149 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:08:27.916031    9149 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0327 11:08:27.916081    9149 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/config.json ...
	I0327 11:08:27.916465    9149 start.go:360] acquireMachinesLock for running-upgrade-599000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:08:27.916487    9149 start.go:364] duration metric: took 17.5µs to acquireMachinesLock for "running-upgrade-599000"
	I0327 11:08:27.916496    9149 start.go:96] Skipping create...Using existing machine configuration
	I0327 11:08:27.916499    9149 fix.go:54] fixHost starting: 
	I0327 11:08:27.917158    9149 fix.go:112] recreateIfNeeded on running-upgrade-599000: state=Running err=<nil>
	W0327 11:08:27.917166    9149 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 11:08:27.923899    9149 out.go:177] * Updating the running qemu2 "running-upgrade-599000" VM ...
	I0327 11:08:27.927947    9149 machine.go:94] provisionDockerMachine start ...
	I0327 11:08:27.927990    9149 main.go:141] libmachine: Using SSH client type: native
	I0327 11:08:27.928123    9149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10303dbf0] 0x103040450 <nil>  [] 0s} localhost 51115 <nil> <nil>}
	I0327 11:08:27.928128    9149 main.go:141] libmachine: About to run SSH command:
	hostname
	I0327 11:08:27.994865    9149 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-599000
	
	I0327 11:08:27.994888    9149 buildroot.go:166] provisioning hostname "running-upgrade-599000"
	I0327 11:08:27.994942    9149 main.go:141] libmachine: Using SSH client type: native
	I0327 11:08:27.995052    9149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10303dbf0] 0x103040450 <nil>  [] 0s} localhost 51115 <nil> <nil>}
	I0327 11:08:27.995060    9149 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-599000 && echo "running-upgrade-599000" | sudo tee /etc/hostname
	I0327 11:08:28.062305    9149 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-599000
	
	I0327 11:08:28.062351    9149 main.go:141] libmachine: Using SSH client type: native
	I0327 11:08:28.062458    9149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10303dbf0] 0x103040450 <nil>  [] 0s} localhost 51115 <nil> <nil>}
	I0327 11:08:28.062466    9149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-599000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-599000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-599000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 11:08:28.125852    9149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 11:08:28.125862    9149 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18517-6013/.minikube CaCertPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18517-6013/.minikube}
	I0327 11:08:28.125871    9149 buildroot.go:174] setting up certificates
	I0327 11:08:28.125876    9149 provision.go:84] configureAuth start
	I0327 11:08:28.125881    9149 provision.go:143] copyHostCerts
	I0327 11:08:28.125954    9149 exec_runner.go:144] found /Users/jenkins/minikube-integration/18517-6013/.minikube/ca.pem, removing ...
	I0327 11:08:28.125960    9149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18517-6013/.minikube/ca.pem
	I0327 11:08:28.126079    9149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18517-6013/.minikube/ca.pem (1082 bytes)
	I0327 11:08:28.126249    9149 exec_runner.go:144] found /Users/jenkins/minikube-integration/18517-6013/.minikube/cert.pem, removing ...
	I0327 11:08:28.126253    9149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18517-6013/.minikube/cert.pem
	I0327 11:08:28.126292    9149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18517-6013/.minikube/cert.pem (1123 bytes)
	I0327 11:08:28.126385    9149 exec_runner.go:144] found /Users/jenkins/minikube-integration/18517-6013/.minikube/key.pem, removing ...
	I0327 11:08:28.126388    9149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18517-6013/.minikube/key.pem
	I0327 11:08:28.126422    9149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18517-6013/.minikube/key.pem (1679 bytes)
	I0327 11:08:28.126501    9149 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-599000 san=[127.0.0.1 localhost minikube running-upgrade-599000]
	I0327 11:08:28.290797    9149 provision.go:177] copyRemoteCerts
	I0327 11:08:28.290842    9149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 11:08:28.290851    9149 sshutil.go:53] new ssh client: &{IP:localhost Port:51115 SSHKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/running-upgrade-599000/id_rsa Username:docker}
	I0327 11:08:28.325451    9149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0327 11:08:28.332752    9149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0327 11:08:28.339750    9149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0327 11:08:28.346393    9149 provision.go:87] duration metric: took 220.5095ms to configureAuth
	I0327 11:08:28.346401    9149 buildroot.go:189] setting minikube options for container-runtime
	I0327 11:08:28.346500    9149 config.go:182] Loaded profile config "running-upgrade-599000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 11:08:28.346530    9149 main.go:141] libmachine: Using SSH client type: native
	I0327 11:08:28.346617    9149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10303dbf0] 0x103040450 <nil>  [] 0s} localhost 51115 <nil> <nil>}
	I0327 11:08:28.346623    9149 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0327 11:08:28.411127    9149 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0327 11:08:28.411135    9149 buildroot.go:70] root file system type: tmpfs
	I0327 11:08:28.411189    9149 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0327 11:08:28.411243    9149 main.go:141] libmachine: Using SSH client type: native
	I0327 11:08:28.411339    9149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10303dbf0] 0x103040450 <nil>  [] 0s} localhost 51115 <nil> <nil>}
	I0327 11:08:28.411372    9149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0327 11:08:28.478993    9149 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0327 11:08:28.479048    9149 main.go:141] libmachine: Using SSH client type: native
	I0327 11:08:28.479173    9149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10303dbf0] 0x103040450 <nil>  [] 0s} localhost 51115 <nil> <nil>}
	I0327 11:08:28.479181    9149 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0327 11:08:28.543933    9149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 11:08:28.543941    9149 machine.go:97] duration metric: took 615.992208ms to provisionDockerMachine
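
	[editor's note] Note the idiom in the SSH command two steps up: the rendered unit is written to docker.service.new, and the mv + daemon-reload + enable + restart only fire when `diff -u` reports a difference, so an unchanged config costs no docker restart. A local Go sketch of the same compare-then-swap pattern (illustrative, not minikube's code):

    package main

    import (
    	"bytes"
    	"log"
    	"os"
    	"os/exec"
    )

    // Only replace the live unit and restart docker when the rendered
    // config actually changed, mirroring the `diff -u ... || { mv ...; }`
    // command in the log above.
    func updateUnit(live, next string) error {
    	have, _ := os.ReadFile(live) // a missing live unit reads as empty
    	want, err := os.ReadFile(next)
    	if err != nil {
    		return err
    	}
    	if bytes.Equal(have, want) {
    		return os.Remove(next) // no change: skip the docker restart entirely
    	}
    	if err := os.Rename(next, live); err != nil {
    		return err
    	}
    	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
    		if err := exec.Command("systemctl", args...).Run(); err != nil {
    			return err
    		}
    	}
    	return nil
    }

    func main() {
    	if err := updateUnit("/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new"); err != nil {
    		log.Fatal(err)
    	}
    }
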
	I0327 11:08:28.543956    9149 start.go:293] postStartSetup for "running-upgrade-599000" (driver="qemu2")
	I0327 11:08:28.543962    9149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 11:08:28.544014    9149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 11:08:28.544025    9149 sshutil.go:53] new ssh client: &{IP:localhost Port:51115 SSHKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/running-upgrade-599000/id_rsa Username:docker}
	I0327 11:08:28.579554    9149 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 11:08:28.580902    9149 info.go:137] Remote host: Buildroot 2021.02.12
	I0327 11:08:28.580909    9149 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18517-6013/.minikube/addons for local assets ...
	I0327 11:08:28.580967    9149 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18517-6013/.minikube/files for local assets ...
	I0327 11:08:28.581054    9149 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18517-6013/.minikube/files/etc/ssl/certs/64762.pem -> 64762.pem in /etc/ssl/certs
	I0327 11:08:28.581139    9149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0327 11:08:28.583998    9149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/files/etc/ssl/certs/64762.pem --> /etc/ssl/certs/64762.pem (1708 bytes)
	I0327 11:08:28.590388    9149 start.go:296] duration metric: took 46.426958ms for postStartSetup
	I0327 11:08:28.590404    9149 fix.go:56] duration metric: took 673.909292ms for fixHost
	I0327 11:08:28.590438    9149 main.go:141] libmachine: Using SSH client type: native
	I0327 11:08:28.590538    9149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10303dbf0] 0x103040450 <nil>  [] 0s} localhost 51115 <nil> <nil>}
	I0327 11:08:28.590543    9149 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0327 11:08:28.654811    9149 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711562908.418108682
	
	I0327 11:08:28.654817    9149 fix.go:216] guest clock: 1711562908.418108682
	I0327 11:08:28.654821    9149 fix.go:229] Guest: 2024-03-27 11:08:28.418108682 -0700 PDT Remote: 2024-03-27 11:08:28.590406 -0700 PDT m=+0.787787959 (delta=-172.297318ms)
	I0327 11:08:28.654832    9149 fix.go:200] guest clock delta is within tolerance: -172.297318ms
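
	[editor's note] The fix step reads the guest's `date +%s.%N`, subtracts the host clock, and accepts the drift if it falls inside a tolerance (here -172ms passed). A sketch of that parse-and-compare; the one-second tolerance below is an assumed value for illustration, since the log only shows that -172ms was judged acceptable:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // Parse `date +%s.%N` output from the guest and compute the skew
    // against the host clock, as in the fix.go lines above.
    func guestClockDelta(out string, host time.Time) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return 0, err
    		}
    	}
    	return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
    	delta, err := guestClockDelta("1711562908.418108682", time.Now())
    	if err != nil {
    		panic(err)
    	}
    	abs := delta
    	if abs < 0 {
    		abs = -abs
    	}
    	const tolerance = time.Second // assumed threshold, not minikube's actual value
    	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, abs <= tolerance)
    }
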
	I0327 11:08:28.654835    9149 start.go:83] releasing machines lock for "running-upgrade-599000", held for 738.34775ms
	I0327 11:08:28.654895    9149 ssh_runner.go:195] Run: cat /version.json
	I0327 11:08:28.654899    9149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 11:08:28.654902    9149 sshutil.go:53] new ssh client: &{IP:localhost Port:51115 SSHKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/running-upgrade-599000/id_rsa Username:docker}
	I0327 11:08:28.654914    9149 sshutil.go:53] new ssh client: &{IP:localhost Port:51115 SSHKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/running-upgrade-599000/id_rsa Username:docker}
	W0327 11:08:28.655438    9149 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:51230->127.0.0.1:51115: write: broken pipe
	I0327 11:08:28.655453    9149 retry.go:31] will retry after 289.920588ms: ssh: handshake failed: write tcp 127.0.0.1:51230->127.0.0.1:51115: write: broken pipe
	W0327 11:08:28.987611    9149 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0327 11:08:28.987735    9149 ssh_runner.go:195] Run: systemctl --version
	I0327 11:08:28.990292    9149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0327 11:08:28.992728    9149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0327 11:08:28.992769    9149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0327 11:08:28.996595    9149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0327 11:08:29.002013    9149 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0327 11:08:29.002026    9149 start.go:494] detecting cgroup driver to use...
	I0327 11:08:29.002159    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 11:08:29.008716    9149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0327 11:08:29.012344    9149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0327 11:08:29.015443    9149 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0327 11:08:29.015471    9149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0327 11:08:29.018459    9149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 11:08:29.021530    9149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0327 11:08:29.024723    9149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 11:08:29.027705    9149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 11:08:29.030582    9149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0327 11:08:29.033756    9149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0327 11:08:29.037242    9149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0327 11:08:29.040102    9149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 11:08:29.042724    9149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 11:08:29.045719    9149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 11:08:29.142774    9149 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0327 11:08:29.154107    9149 start.go:494] detecting cgroup driver to use...
	I0327 11:08:29.154188    9149 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0327 11:08:29.160386    9149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 11:08:29.164848    9149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0327 11:08:29.170562    9149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 11:08:29.175169    9149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 11:08:29.179416    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 11:08:29.184966    9149 ssh_runner.go:195] Run: which cri-dockerd
	I0327 11:08:29.186476    9149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0327 11:08:29.189556    9149 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0327 11:08:29.194827    9149 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0327 11:08:29.284838    9149 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0327 11:08:29.373422    9149 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0327 11:08:29.373495    9149 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0327 11:08:29.378882    9149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 11:08:29.468730    9149 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0327 11:08:31.776866    9149 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.308135625s)
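
	[editor's note] The 130-byte /etc/docker/daemon.json written above is what switches docker to the cgroupfs driver before this restart. The log does not show the file's contents, so the rendering below is a plausible reconstruction, not the exact file minikube writes:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Render a daemon.json that selects the cgroupfs driver. The exact
    // fields minikube writes are not visible in the log; this shape is
    // an assumption.
    func main() {
    	cfg := map[string]any{
    		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
    	}
    	out, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out))
    }
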
	I0327 11:08:31.776940    9149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0327 11:08:31.781601    9149 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0327 11:08:31.788097    9149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0327 11:08:31.792736    9149 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0327 11:08:31.885680    9149 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0327 11:08:31.968955    9149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 11:08:32.051523    9149 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0327 11:08:32.057733    9149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0327 11:08:32.062726    9149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 11:08:32.145056    9149 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0327 11:08:32.182783    9149 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0327 11:08:32.182858    9149 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0327 11:08:32.185070    9149 start.go:562] Will wait 60s for crictl version
	I0327 11:08:32.185113    9149 ssh_runner.go:195] Run: which crictl
	I0327 11:08:32.186629    9149 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 11:08:32.198136    9149 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0327 11:08:32.198196    9149 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0327 11:08:32.212123    9149 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0327 11:08:32.233444    9149 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0327 11:08:32.233513    9149 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0327 11:08:32.234915    9149 kubeadm.go:877] updating cluster {Name:running-upgrade-599000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51147 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-599000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0327 11:08:32.234958    9149 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0327 11:08:32.234994    9149 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0327 11:08:32.245142    9149 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0327 11:08:32.245149    9149 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0327 11:08:32.245195    9149 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0327 11:08:32.248804    9149 ssh_runner.go:195] Run: which lz4
	I0327 11:08:32.250086    9149 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0327 11:08:32.251499    9149 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0327 11:08:32.251508    9149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0327 11:08:32.959894    9149 docker.go:649] duration metric: took 709.840667ms to copy over tarball
	I0327 11:08:32.959953    9149 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0327 11:08:34.236090    9149 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.276133166s)
	I0327 11:08:34.236102    9149 ssh_runner.go:146] rm: /preloaded.tar.lz4
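
	[editor's note] The preload path above is: stat /preloaded.tar.lz4 on the guest, scp the cached tarball over, unpack it into /var with lz4 while preserving xattrs, then delete it. Throughout the log, ssh_runner prints a "Completed: ... (duration)" line for commands that run long. A local sketch of that timing wrapper, reusing the tar invocation from the log (illustrative only):

    package main

    import (
    	"log"
    	"os/exec"
    	"time"
    )

    // Run a command and, like ssh_runner, report its duration when it
    // takes more than a second (the extraction above took 1.276s).
    func run(name string, args ...string) error {
    	start := time.Now()
    	err := exec.Command(name, args...).Run()
    	if d := time.Since(start); d > time.Second {
    		log.Printf("Completed: %s: (%s)", name, d)
    	}
    	return err
    }

    func main() {
    	if err := run("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"); err != nil {
    		log.Fatal(err)
    	}
    }
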
	I0327 11:08:34.251471    9149 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0327 11:08:34.254282    9149 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0327 11:08:34.259485    9149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 11:08:34.345611    9149 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0327 11:08:35.779864    9149 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.434248208s)
	I0327 11:08:35.779960    9149 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0327 11:08:35.795731    9149 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0327 11:08:35.795740    9149 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0327 11:08:35.795746    9149 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0327 11:08:35.804968    9149 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 11:08:35.804984    9149 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 11:08:35.805145    9149 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0327 11:08:35.805195    9149 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 11:08:35.805274    9149 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 11:08:35.805358    9149 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 11:08:35.805694    9149 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0327 11:08:35.805886    9149 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 11:08:35.814229    9149 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 11:08:35.814328    9149 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 11:08:35.814331    9149 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 11:08:35.814396    9149 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 11:08:35.814496    9149 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0327 11:08:35.814511    9149 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 11:08:35.814543    9149 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0327 11:08:35.814600    9149 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	W0327 11:08:37.743863    9149 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0327 11:08:37.744502    9149 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0327 11:08:37.779515    9149 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0327 11:08:37.779590    9149 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 11:08:37.779685    9149 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0327 11:08:37.801751    9149 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0327 11:08:37.804363    9149 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0327 11:08:37.804491    9149 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0327 11:08:37.816811    9149 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0327 11:08:37.816834    9149 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 11:08:37.816864    9149 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0327 11:08:37.816884    9149 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0327 11:08:37.816882    9149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0327 11:08:37.836557    9149 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0327 11:08:37.848227    9149 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0327 11:08:37.866506    9149 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0327 11:08:37.866521    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0327 11:08:37.866630    9149 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 11:08:37.868911    9149 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0327 11:08:37.868926    9149 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0327 11:08:37.868953    9149 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0327 11:08:37.889000    9149 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0327 11:08:37.896284    9149 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0327 11:08:37.899919    9149 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0327 11:08:37.932460    9149 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0327 11:08:37.932492    9149 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0327 11:08:37.932507    9149 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 11:08:37.932510    9149 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0327 11:08:37.932554    9149 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 11:08:37.936267    9149 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0327 11:08:37.936284    9149 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 11:08:37.936328    9149 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0327 11:08:37.936390    9149 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0327 11:08:37.936400    9149 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 11:08:37.936405    9149 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0327 11:08:37.936414    9149 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0327 11:08:37.936417    9149 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0327 11:08:37.936443    9149 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0327 11:08:37.943830    9149 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0327 11:08:37.963187    9149 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0327 11:08:37.963192    9149 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0327 11:08:37.963340    9149 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0327 11:08:37.963426    9149 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0327 11:08:37.965003    9149 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0327 11:08:37.965012    9149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0327 11:08:37.972447    9149 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0327 11:08:37.972456    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0327 11:08:38.002952    9149 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
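
	[editor's note] Images missing from the runtime (or present with the wrong architecture) are transferred from the local cache and loaded via `sudo cat <tar> | docker load`, as just happened for pause_3.7. A sketch of the same load using a direct stdin pipe instead of a shell pipeline (illustrative only):

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    // Pipe an image tarball into `docker load`, mirroring the
    // "sudo cat /var/lib/minikube/images/pause_3.7 | docker load" line above.
    func loadImage(path string) error {
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	cmd := exec.Command("docker", "load")
    	cmd.Stdin = f
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	return cmd.Run()
    }

    func main() {
    	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
    		log.Fatal(err)
    	}
    }
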
	W0327 11:08:38.395939    9149 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0327 11:08:38.396225    9149 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 11:08:38.435821    9149 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0327 11:08:38.435844    9149 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 11:08:38.435912    9149 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 11:08:39.327023    9149 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0327 11:08:39.327465    9149 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0327 11:08:39.333637    9149 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0327 11:08:39.333694    9149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0327 11:08:39.388874    9149 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0327 11:08:39.388889    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0327 11:08:39.622715    9149 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0327 11:08:39.622763    9149 cache_images.go:92] duration metric: took 3.827034875s to LoadCachedImages
	W0327 11:08:39.622799    9149 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0327 11:08:39.622805    9149 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0327 11:08:39.622857    9149 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-599000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-599000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0327 11:08:39.622947    9149 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0327 11:08:39.635932    9149 cni.go:84] Creating CNI manager for ""
	I0327 11:08:39.635943    9149 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:08:39.635948    9149 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0327 11:08:39.635956    9149 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-599000 NodeName:running-upgrade-599000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0327 11:08:39.636025    9149 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-599000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0327 11:08:39.636090    9149 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0327 11:08:39.639081    9149 binaries.go:44] Found k8s binaries, skipping transfer
	I0327 11:08:39.639111    9149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0327 11:08:39.642458    9149 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0327 11:08:39.647608    9149 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 11:08:39.652691    9149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0327 11:08:39.658255    9149 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0327 11:08:39.659531    9149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 11:08:39.732852    9149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 11:08:39.738308    9149 certs.go:68] Setting up /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000 for IP: 10.0.2.15
	I0327 11:08:39.738315    9149 certs.go:194] generating shared ca certs ...
	I0327 11:08:39.738326    9149 certs.go:226] acquiring lock for ca certs: {Name:mk7b549dba732db56799e548b1d7e96b357996ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:08:39.738542    9149 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/ca.key
	I0327 11:08:39.738578    9149 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/proxy-client-ca.key
	I0327 11:08:39.738583    9149 certs.go:256] generating profile certs ...
	I0327 11:08:39.738639    9149 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/client.key
	I0327 11:08:39.738650    9149 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/apiserver.key.1a894048
	I0327 11:08:39.738658    9149 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/apiserver.crt.1a894048 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0327 11:08:39.818890    9149 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/apiserver.crt.1a894048 ...
	I0327 11:08:39.818899    9149 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/apiserver.crt.1a894048: {Name:mk9315a4345bb16d9d499e85511fc08f4bc0dc5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:08:39.819144    9149 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/apiserver.key.1a894048 ...
	I0327 11:08:39.819149    9149 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/apiserver.key.1a894048: {Name:mk836862b7a42837102d672add18c4bd3b1c5395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:08:39.819262    9149 certs.go:381] copying /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/apiserver.crt.1a894048 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/apiserver.crt
	I0327 11:08:39.819440    9149 certs.go:385] copying /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/apiserver.key.1a894048 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/apiserver.key
	I0327 11:08:39.819585    9149 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/proxy-client.key
	I0327 11:08:39.819694    9149 certs.go:484] found cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/6476.pem (1338 bytes)
	W0327 11:08:39.819715    9149 certs.go:480] ignoring /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/6476_empty.pem, impossibly tiny 0 bytes
	I0327 11:08:39.819722    9149 certs.go:484] found cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca-key.pem (1675 bytes)
	I0327 11:08:39.819741    9149 certs.go:484] found cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem (1082 bytes)
	I0327 11:08:39.819758    9149 certs.go:484] found cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem (1123 bytes)
	I0327 11:08:39.819773    9149 certs.go:484] found cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/key.pem (1679 bytes)
	I0327 11:08:39.819810    9149 certs.go:484] found cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/files/etc/ssl/certs/64762.pem (1708 bytes)
	I0327 11:08:39.820138    9149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 11:08:39.827446    9149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0327 11:08:39.834714    9149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 11:08:39.842228    9149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0327 11:08:39.849694    9149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0327 11:08:39.856636    9149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0327 11:08:39.863632    9149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 11:08:39.870290    9149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0327 11:08:39.877565    9149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 11:08:39.884849    9149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/6476.pem --> /usr/share/ca-certificates/6476.pem (1338 bytes)
	I0327 11:08:39.892043    9149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/files/etc/ssl/certs/64762.pem --> /usr/share/ca-certificates/64762.pem (1708 bytes)
	I0327 11:08:39.898657    9149 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0327 11:08:39.903335    9149 ssh_runner.go:195] Run: openssl version
	I0327 11:08:39.905146    9149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 11:08:39.908654    9149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 11:08:39.910187    9149 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 18:08 /usr/share/ca-certificates/minikubeCA.pem
	I0327 11:08:39.910206    9149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 11:08:39.911987    9149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0327 11:08:39.914917    9149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6476.pem && ln -fs /usr/share/ca-certificates/6476.pem /etc/ssl/certs/6476.pem"
	I0327 11:08:39.917967    9149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6476.pem
	I0327 11:08:39.919346    9149 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 17:55 /usr/share/ca-certificates/6476.pem
	I0327 11:08:39.919367    9149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6476.pem
	I0327 11:08:39.921158    9149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6476.pem /etc/ssl/certs/51391683.0"
	I0327 11:08:39.924329    9149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64762.pem && ln -fs /usr/share/ca-certificates/64762.pem /etc/ssl/certs/64762.pem"
	I0327 11:08:39.927700    9149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/64762.pem
	I0327 11:08:39.929218    9149 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 17:55 /usr/share/ca-certificates/64762.pem
	I0327 11:08:39.929236    9149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64762.pem
	I0327 11:08:39.931164    9149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64762.pem /etc/ssl/certs/3ec20f2e.0"
	I0327 11:08:39.933842    9149 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 11:08:39.935323    9149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0327 11:08:39.937201    9149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0327 11:08:39.939093    9149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0327 11:08:39.940840    9149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0327 11:08:39.942694    9149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0327 11:08:39.944429    9149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
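
	[editor's note] Each control-plane certificate above is vetted with `openssl x509 -checkend 86400`, which exits non-zero if the certificate is expired or will expire within the next 24 hours. An equivalent check with Go's crypto/x509 (a sketch, not minikube's code):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    // Equivalent of `openssl x509 -checkend 86400`: true when the
    // certificate is already expired or will expire within the window.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
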
	I0327 11:08:39.946292    9149 kubeadm.go:391] StartCluster: {Name:running-upgrade-599000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51147 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-599000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 11:08:39.946360    9149 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0327 11:08:39.956714    9149 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0327 11:08:39.960826    9149 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0327 11:08:39.960835    9149 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0327 11:08:39.960838    9149 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0327 11:08:39.960861    9149 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0327 11:08:39.964011    9149 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0327 11:08:39.964052    9149 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-599000" does not appear in /Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:08:39.964067    9149 kubeconfig.go:62] /Users/jenkins/minikube-integration/18517-6013/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-599000" cluster setting kubeconfig missing "running-upgrade-599000" context setting]
	I0327 11:08:39.964233    9149 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/kubeconfig: {Name:mkbb2a293ed6f55faa985236f49efd479958575e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:08:39.964876    9149 kapi.go:59] client config for running-upgrade-599000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/client.key", CAFile:"/Users/jenkins/minikube-integration/18517-6013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10432eb90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 11:08:39.965681    9149 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0327 11:08:39.968437    9149 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-599000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0327 11:08:39.968442    9149 kubeadm.go:1154] stopping kube-system containers ...
	I0327 11:08:39.968477    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0327 11:08:39.978914    9149 docker.go:483] Stopping containers: [4566f84abfbf e75cf76ce25f 728d15925f61 e968e8f3d396 80d88951e124 6adb3ee47838 11524bd3206a 170368f0be1d 7e60cccbe5e9 0ab6d993501a 1ea611d5a6b9 cdb5384beea9]
	I0327 11:08:39.978968    9149 ssh_runner.go:195] Run: docker stop 4566f84abfbf e75cf76ce25f 728d15925f61 e968e8f3d396 80d88951e124 6adb3ee47838 11524bd3206a 170368f0be1d 7e60cccbe5e9 0ab6d993501a 1ea611d5a6b9 cdb5384beea9
	I0327 11:08:39.990113    9149 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0327 11:08:40.073302    9149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 11:08:40.077349    9149 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 Mar 27 18:08 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Mar 27 18:08 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Mar 27 18:08 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Mar 27 18:08 /etc/kubernetes/scheduler.conf
	
	I0327 11:08:40.077389    9149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/admin.conf
	I0327 11:08:40.080593    9149 kubeadm.go:162] "https://control-plane.minikube.internal:51147" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0327 11:08:40.080622    9149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 11:08:40.083936    9149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/kubelet.conf
	I0327 11:08:40.086543    9149 kubeadm.go:162] "https://control-plane.minikube.internal:51147" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0327 11:08:40.086573    9149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 11:08:40.089117    9149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/controller-manager.conf
	I0327 11:08:40.091968    9149 kubeadm.go:162] "https://control-plane.minikube.internal:51147" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0327 11:08:40.091989    9149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 11:08:40.094557    9149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/scheduler.conf
	I0327 11:08:40.096981    9149 kubeadm.go:162] "https://control-plane.minikube.internal:51147" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0327 11:08:40.097001    9149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
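Editor's note: the four grep/rm pairs above are one loop: each existing config file is probed for the expected control-plane endpoint, grep's exit status 1 means the string is absent, and the stale file is removed so kubeadm can regenerate it. A minimal sketch under those assumptions, with the endpoint and file names taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:51147"
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := "/etc/kubernetes/" + f
		// grep exits non-zero (status 1) when the endpoint is missing from the file.
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			_ = exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}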
	I0327 11:08:40.100000    9149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 11:08:40.102837    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 11:08:40.123879    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 11:08:40.828363    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0327 11:08:41.077705    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 11:08:41.130479    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
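Editor's note: the five Run lines above are the standard kubeadm init phase sequence replayed against the versioned binary: certs, kubeconfig, kubelet-start, control-plane, then local etcd, all from the regenerated config. A minimal sketch of driving that sequence; the local exec here is a stand-in for minikube's ssh_runner.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		// Each phase runs under the versioned kubeadm binary with the new config.
		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
			return
		}
	}
}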
	I0327 11:08:41.173877    9149 api_server.go:52] waiting for apiserver process to appear ...
	I0327 11:08:41.173955    9149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 11:08:41.676009    9149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 11:08:42.175996    9149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 11:08:42.180216    9149 api_server.go:72] duration metric: took 1.006347583s to wait for apiserver process to appear ...
	I0327 11:08:42.180224    9149 api_server.go:88] waiting for apiserver healthz status ...
	I0327 11:08:42.180254    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:08:47.181650    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:08:47.181693    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:08:52.182303    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:08:52.182343    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:08:57.182831    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:08:57.182922    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:09:02.183838    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:09:02.183907    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:09:07.184953    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:09:07.185020    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:09:12.186383    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:09:12.186464    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:09:17.188099    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:09:17.188173    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:09:22.190650    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:09:22.190727    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:09:27.191495    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:09:27.191542    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:09:32.193858    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:09:32.193982    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:09:37.196482    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:09:37.196535    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:09:42.198930    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
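Editor's note: from here the report settles into the pattern that repeats for the rest of the run: a /healthz probe times out, and minikube gathers diagnostics before probing again. A minimal sketch of the probe loop, assuming a five-second per-request deadline (which would explain each "context deadline exceeded" landing roughly five seconds after its "Checking apiserver healthz" line) and skipping TLS verification to stay self-contained:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumed per-probe deadline
		Transport: &http.Transport{
			// The real client trusts the minikube CA; skipping verification
			// keeps this sketch self-contained.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 12; i++ { // bounded retries, for illustration only
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // the log's "stopped: ... context deadline exceeded"
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
	}
	fmt.Println("apiserver never became healthy")
}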
	I0327 11:09:42.199301    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:09:42.236062    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:09:42.236214    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:09:42.258495    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:09:42.258610    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:09:42.273450    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:09:42.273527    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:09:42.290060    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:09:42.290125    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:09:42.300839    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:09:42.300906    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:09:42.315026    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:09:42.315099    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:09:42.325021    9149 logs.go:276] 0 containers: []
	W0327 11:09:42.325035    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:09:42.325092    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:09:42.335385    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:09:42.335400    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:09:42.335406    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:09:42.351468    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:09:42.351482    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:09:42.376649    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:09:42.376658    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:09:42.411411    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:09:42.411419    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:09:42.436683    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:09:42.436693    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:09:42.447370    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:09:42.447381    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:09:42.462456    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:09:42.462470    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:09:42.474363    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:09:42.474376    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:09:42.543533    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:09:42.543545    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:09:42.557400    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:09:42.557409    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:09:42.574250    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:09:42.574261    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:09:42.589206    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:09:42.589217    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:09:42.611065    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:09:42.611074    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:09:42.623215    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:09:42.623227    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:09:42.635141    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:09:42.635151    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:09:42.639987    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:09:42.639992    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
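Editor's note: each diagnostic pass above follows the same two-step shape: docker ps -a --filter name=k8s_<component> enumerates container IDs per control-plane component (two IDs where a container was restarted, zero for the absent kindnet), then docker logs --tail 400 tails each ID. A minimal sketch of one pass; the component list is the one in the log and error handling is trimmed for brevity.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println(c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors the logs.go:276 lines
		for _, id := range ids {
			// The real code streams these tails into the failure report.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			_ = logs
		}
	}
}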
	I0327 11:09:45.155473    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:09:50.157894    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:09:50.158342    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:09:50.197920    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:09:50.198046    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:09:50.219915    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:09:50.220016    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:09:50.239240    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:09:50.239318    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:09:50.251498    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:09:50.251575    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:09:50.262649    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:09:50.262718    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:09:50.273531    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:09:50.273594    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:09:50.286081    9149 logs.go:276] 0 containers: []
	W0327 11:09:50.286091    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:09:50.286145    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:09:50.296590    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:09:50.296616    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:09:50.296621    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:09:50.331438    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:09:50.331445    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:09:50.356052    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:09:50.356064    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:09:50.374500    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:09:50.374509    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:09:50.399933    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:09:50.399943    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:09:50.411547    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:09:50.411563    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:09:50.423700    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:09:50.423719    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:09:50.435490    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:09:50.435501    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:09:50.471765    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:09:50.471776    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:09:50.488919    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:09:50.488930    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:09:50.504111    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:09:50.504119    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:09:50.515659    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:09:50.515670    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:09:50.519917    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:09:50.519924    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:09:50.533902    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:09:50.533910    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:09:50.545983    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:09:50.545994    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:09:50.557375    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:09:50.557383    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:09:53.083940    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:09:58.086421    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:09:58.086819    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:09:58.124692    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:09:58.124817    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:09:58.145394    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:09:58.145494    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:09:58.160294    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:09:58.160365    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:09:58.174162    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:09:58.174226    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:09:58.190047    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:09:58.190118    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:09:58.200740    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:09:58.200808    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:09:58.210660    9149 logs.go:276] 0 containers: []
	W0327 11:09:58.210670    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:09:58.210722    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:09:58.221367    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:09:58.221384    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:09:58.221390    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:09:58.236393    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:09:58.236405    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:09:58.248239    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:09:58.248248    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:09:58.260285    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:09:58.260295    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:09:58.271565    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:09:58.271577    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:09:58.285773    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:09:58.285784    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:09:58.321102    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:09:58.321114    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:09:58.335256    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:09:58.335269    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:09:58.359382    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:09:58.359393    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:09:58.370354    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:09:58.370364    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:09:58.375336    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:09:58.375341    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:09:58.389061    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:09:58.389070    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:09:58.424976    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:09:58.424986    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:09:58.439827    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:09:58.439837    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:09:58.458542    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:09:58.458554    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:09:58.475284    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:09:58.475294    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:10:01.001636    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:10:06.003772    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:10:06.003957    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:10:06.029651    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:10:06.029723    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:10:06.043289    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:10:06.043354    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:10:06.061222    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:10:06.061307    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:10:06.072927    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:10:06.072997    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:10:06.084863    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:10:06.084929    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:10:06.094863    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:10:06.094919    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:10:06.104982    9149 logs.go:276] 0 containers: []
	W0327 11:10:06.104993    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:10:06.105047    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:10:06.115158    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:10:06.115179    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:10:06.115186    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:10:06.130001    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:10:06.130015    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:10:06.153189    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:10:06.153200    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:10:06.169718    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:10:06.169728    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:10:06.181853    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:10:06.181866    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:10:06.194946    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:10:06.194957    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:10:06.199697    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:10:06.199702    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:10:06.217016    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:10:06.217029    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:10:06.235585    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:10:06.235594    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:10:06.272166    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:10:06.272175    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:10:06.291243    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:10:06.291255    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:10:06.302394    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:10:06.302406    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:10:06.314738    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:10:06.314750    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:10:06.349191    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:10:06.349200    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:10:06.363945    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:10:06.363956    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:10:06.375900    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:10:06.375909    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:10:08.902550    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:10:13.904342    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:10:13.904777    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:10:13.943615    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:10:13.943733    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:10:13.965544    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:10:13.965637    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:10:13.981040    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:10:13.981101    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:10:13.993774    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:10:13.993831    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:10:14.004695    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:10:14.004759    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:10:14.015280    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:10:14.015335    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:10:14.025654    9149 logs.go:276] 0 containers: []
	W0327 11:10:14.025665    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:10:14.025722    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:10:14.036209    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:10:14.036227    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:10:14.036232    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:10:14.048060    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:10:14.048069    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:10:14.073190    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:10:14.073200    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:10:14.077859    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:10:14.077865    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:10:14.107337    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:10:14.107348    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:10:14.121251    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:10:14.121263    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:10:14.135014    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:10:14.135025    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:10:14.150171    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:10:14.150183    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:10:14.168471    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:10:14.168481    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:10:14.181006    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:10:14.181017    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:10:14.217138    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:10:14.217145    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:10:14.251417    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:10:14.251429    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:10:14.263059    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:10:14.263071    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:10:14.274643    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:10:14.274651    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:10:14.287371    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:10:14.287382    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:10:14.301688    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:10:14.301699    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:10:16.815046    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:10:21.817659    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:10:21.818055    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:10:21.856988    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:10:21.857121    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:10:21.878997    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:10:21.879108    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:10:21.894699    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:10:21.894785    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:10:21.907650    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:10:21.907717    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:10:21.919005    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:10:21.919070    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:10:21.929831    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:10:21.929894    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:10:21.939780    9149 logs.go:276] 0 containers: []
	W0327 11:10:21.939793    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:10:21.939842    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:10:21.950236    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:10:21.950253    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:10:21.950259    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:10:21.964183    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:10:21.964195    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:10:21.976768    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:10:21.976778    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:10:21.992888    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:10:21.992901    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:10:22.029329    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:10:22.029340    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:10:22.033584    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:10:22.033592    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:10:22.044810    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:10:22.044823    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:10:22.059988    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:10:22.060002    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:10:22.071830    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:10:22.071839    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:10:22.096554    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:10:22.096564    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:10:22.131660    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:10:22.131672    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:10:22.155507    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:10:22.155521    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:10:22.167307    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:10:22.167319    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:10:22.181418    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:10:22.181431    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:10:22.197250    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:10:22.197259    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:10:22.209271    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:10:22.209280    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:10:24.727665    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:10:29.730250    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:10:29.730354    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:10:29.745390    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:10:29.745483    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:10:29.757055    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:10:29.757128    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:10:29.768190    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:10:29.768258    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:10:29.780798    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:10:29.780869    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:10:29.792555    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:10:29.792633    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:10:29.803351    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:10:29.803415    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:10:29.814651    9149 logs.go:276] 0 containers: []
	W0327 11:10:29.814665    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:10:29.814724    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:10:29.826323    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:10:29.826341    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:10:29.826347    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:10:29.839656    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:10:29.839667    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:10:29.852039    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:10:29.852052    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:10:29.890193    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:10:29.890210    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:10:29.905618    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:10:29.905630    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:10:29.921414    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:10:29.921423    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:10:29.933579    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:10:29.933595    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:10:29.959081    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:10:29.959100    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:10:29.963560    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:10:29.963570    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:10:29.975919    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:10:29.975932    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:10:29.995190    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:10:29.995200    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:10:30.012511    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:10:30.012523    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:10:30.037884    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:10:30.037905    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:10:30.053736    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:10:30.053749    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:10:30.067034    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:10:30.067048    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:10:30.079610    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:10:30.079623    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:10:32.619088    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:10:37.621817    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:10:37.621975    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:10:37.635243    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:10:37.635324    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:10:37.645835    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:10:37.645906    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:10:37.656565    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:10:37.656630    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:10:37.667369    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:10:37.667438    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:10:37.678122    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:10:37.678187    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:10:37.688982    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:10:37.689044    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:10:37.699303    9149 logs.go:276] 0 containers: []
	W0327 11:10:37.699314    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:10:37.699365    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:10:37.709897    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:10:37.709914    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:10:37.709919    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:10:37.722074    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:10:37.722084    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:10:37.737189    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:10:37.737200    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:10:37.754909    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:10:37.754919    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:10:37.767693    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:10:37.767704    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:10:37.779186    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:10:37.779195    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:10:37.784228    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:10:37.784235    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:10:37.802489    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:10:37.802498    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:10:37.814334    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:10:37.814345    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:10:37.850069    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:10:37.850077    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:10:37.886797    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:10:37.886808    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:10:37.911966    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:10:37.911976    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:10:37.936306    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:10:37.936315    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:10:37.951370    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:10:37.951380    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:10:37.965312    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:10:37.965321    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:10:37.979557    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:10:37.979567    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:10:40.494603    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:10:45.496835    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:10:45.497137    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:10:45.526537    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:10:45.526663    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:10:45.546339    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:10:45.546443    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:10:45.568147    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:10:45.568217    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:10:45.578697    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:10:45.578765    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:10:45.588877    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:10:45.588934    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:10:45.599324    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:10:45.599390    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:10:45.609572    9149 logs.go:276] 0 containers: []
	W0327 11:10:45.609581    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:10:45.609632    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:10:45.619698    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:10:45.619715    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:10:45.619720    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:10:45.633631    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:10:45.633643    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:10:45.644536    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:10:45.644550    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:10:45.661737    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:10:45.661747    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:10:45.666243    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:10:45.666248    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:10:45.701187    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:10:45.701198    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:10:45.714754    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:10:45.714766    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:10:45.726451    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:10:45.726461    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:10:45.738845    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:10:45.738856    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:10:45.755068    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:10:45.755081    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:10:45.770041    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:10:45.770049    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:10:45.781335    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:10:45.781344    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:10:45.805361    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:10:45.805367    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:10:45.817043    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:10:45.817054    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:10:45.852611    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:10:45.852619    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:10:45.866156    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:10:45.866169    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:10:48.390405    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:10:53.393125    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:10:53.393261    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:10:53.410303    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:10:53.410386    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:10:53.423960    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:10:53.424032    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:10:53.436154    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:10:53.436219    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:10:53.451900    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:10:53.451961    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:10:53.462386    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:10:53.462451    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:10:53.473112    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:10:53.473177    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:10:53.483017    9149 logs.go:276] 0 containers: []
	W0327 11:10:53.483029    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:10:53.483088    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:10:53.493180    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:10:53.493202    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:10:53.493208    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:10:53.497923    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:10:53.497930    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:10:53.533373    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:10:53.533384    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:10:53.545719    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:10:53.545730    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:10:53.557767    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:10:53.557779    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:10:53.575047    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:10:53.575059    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:10:53.586744    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:10:53.586754    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:10:53.610169    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:10:53.610178    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:10:53.645595    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:10:53.645605    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:10:53.659660    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:10:53.659673    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:10:53.674728    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:10:53.674738    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:10:53.686394    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:10:53.686405    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:10:53.700047    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:10:53.700059    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:10:53.723660    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:10:53.723670    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:10:53.738020    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:10:53.738029    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:10:53.752467    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:10:53.752477    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
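Each retry re-enumerates the control-plane containers with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, using the k8s_ naming convention of the dockershim/cri-dockerd runtime. Because -a includes exited containers, the two IDs reported for kube-apiserver, etcd, kube-scheduler, and kube-controller-manager likely correspond to an exited attempt plus its restart. A sketch of this discovery step, assuming local docker access (the containerIDs helper is hypothetical):

	// container_ids.go — sketch of the per-component container discovery step.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists running and exited containers named k8s_<component>.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "storage-provisioner"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
		}
	}
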
	I0327 11:10:56.265781    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:11:01.266589    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:11:01.266979    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:11:01.306625    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:11:01.306767    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:11:01.333683    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:11:01.333770    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:11:01.347148    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:11:01.347215    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:11:01.358724    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:11:01.358796    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:11:01.371268    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:11:01.371327    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:11:01.381668    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:11:01.381746    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:11:01.396826    9149 logs.go:276] 0 containers: []
	W0327 11:11:01.396838    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:11:01.396894    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:11:01.413585    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:11:01.413601    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:11:01.413606    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:11:01.433037    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:11:01.433047    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:11:01.444711    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:11:01.444721    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:11:01.457337    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:11:01.457346    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:11:01.472939    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:11:01.472951    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:11:01.490642    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:11:01.490653    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:11:01.506024    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:11:01.506035    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:11:01.540334    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:11:01.540345    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:11:01.564353    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:11:01.564363    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:11:01.578234    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:11:01.578244    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:11:01.589917    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:11:01.589926    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:11:01.601682    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:11:01.601692    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:11:01.625392    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:11:01.625402    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:11:01.629660    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:11:01.629666    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:11:01.645137    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:11:01.645148    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:11:01.657050    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:11:01.657061    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
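The "container status" gather uses a shell fallback: in sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, the backtick substitution yields crictl's full path when it is installed and the bare (failing) name otherwise, so the outer || sudo docker ps -a takes over on hosts without crictl. A sketch running the same fallback through bash, as the log does:

	// container_status.go — sketch of the crictl-or-docker fallback above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// copied verbatim from the gather command in the log
		const script = "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
		out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
		if err != nil {
			fmt.Println("both crictl and docker failed:", err)
		}
		fmt.Print(string(out))
	}
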
	I0327 11:11:04.194615    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:11:09.197001    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:11:09.197242    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:11:09.222005    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:11:09.222129    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:11:09.238985    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:11:09.239081    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:11:09.252172    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:11:09.252243    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:11:09.263773    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:11:09.263832    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:11:09.275263    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:11:09.275336    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:11:09.286435    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:11:09.286503    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:11:09.296567    9149 logs.go:276] 0 containers: []
	W0327 11:11:09.296579    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:11:09.296635    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:11:09.307000    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:11:09.307018    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:11:09.307025    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:11:09.344773    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:11:09.344785    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:11:09.356607    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:11:09.356618    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:11:09.374606    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:11:09.374616    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:11:09.399408    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:11:09.399416    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:11:09.435428    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:11:09.435439    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:11:09.447461    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:11:09.447472    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:11:09.459319    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:11:09.459334    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:11:09.477011    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:11:09.477024    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:11:09.481846    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:11:09.481859    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:11:09.505416    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:11:09.505429    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:11:09.530757    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:11:09.530773    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:11:09.544466    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:11:09.544480    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:11:09.559913    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:11:09.559926    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:11:09.576048    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:11:09.576061    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:11:09.589344    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:11:09.589356    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:11:12.109547    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:11:17.111809    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:11:17.112028    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:11:17.131877    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:11:17.131972    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:11:17.148173    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:11:17.148243    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:11:17.160231    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:11:17.160291    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:11:17.171454    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:11:17.171530    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:11:17.183085    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:11:17.183147    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:11:17.193865    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:11:17.193930    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:11:17.206075    9149 logs.go:276] 0 containers: []
	W0327 11:11:17.206085    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:11:17.206141    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:11:17.216611    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:11:17.216629    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:11:17.216634    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:11:17.241216    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:11:17.241229    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:11:17.257877    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:11:17.257887    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:11:17.270379    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:11:17.270389    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:11:17.305208    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:11:17.305222    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:11:17.309503    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:11:17.309511    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:11:17.323020    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:11:17.323031    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:11:17.337432    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:11:17.337442    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:11:17.350490    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:11:17.350502    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:11:17.365453    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:11:17.365464    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:11:17.382978    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:11:17.382989    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:11:17.394812    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:11:17.394824    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:11:17.429335    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:11:17.429346    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:11:17.443463    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:11:17.443472    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:11:17.457776    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:11:17.457787    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:11:17.469410    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:11:17.469422    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
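System-level logs are gathered with bounded tails: the last 400 lines of the kubelet and docker/cri-docker units via journalctl, per-container logs via docker logs --tail 400, and kernel messages at warn level or above via dmesg. A small sketch wiring those same commands together (the gather wrapper is illustrative; the command strings are copied from the log):

	// gather_logs.go — sketch of the bounded system-log gathers in the loop above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func gather(name, script string) {
		fmt.Printf("==> %s <==\n", name)
		out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
		if err != nil {
			fmt.Println("gather failed:", err)
		}
		fmt.Print(string(out))
	}

	func main() {
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	}
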
	I0327 11:11:19.995387    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:11:24.997890    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:11:24.998292    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:11:25.036439    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:11:25.036586    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:11:25.061172    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:11:25.061262    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:11:25.075350    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:11:25.075425    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:11:25.090850    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:11:25.090916    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:11:25.102265    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:11:25.102326    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:11:25.113316    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:11:25.113390    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:11:25.123276    9149 logs.go:276] 0 containers: []
	W0327 11:11:25.123290    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:11:25.123348    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:11:25.134026    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:11:25.134044    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:11:25.134049    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:11:25.138704    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:11:25.138714    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:11:25.174620    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:11:25.174632    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:11:25.190116    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:11:25.190126    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:11:25.201916    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:11:25.201928    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:11:25.238969    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:11:25.238979    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:11:25.262139    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:11:25.262145    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:11:25.274586    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:11:25.274598    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:11:25.286881    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:11:25.286890    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:11:25.303714    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:11:25.303724    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:11:25.317593    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:11:25.317607    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:11:25.355635    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:11:25.355648    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:11:25.370203    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:11:25.370216    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:11:25.387518    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:11:25.387528    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:11:25.399301    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:11:25.399311    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:11:25.410896    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:11:25.410906    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:11:27.924843    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:11:32.927496    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:11:32.927662    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:11:32.940032    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:11:32.940101    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:11:32.950715    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:11:32.950784    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:11:32.961769    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:11:32.961831    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:11:32.972307    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:11:32.972375    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:11:32.982554    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:11:32.982617    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:11:32.993278    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:11:32.993336    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:11:33.003241    9149 logs.go:276] 0 containers: []
	W0327 11:11:33.003249    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:11:33.003297    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:11:33.014034    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:11:33.014048    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:11:33.014053    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:11:33.051927    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:11:33.051942    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:11:33.069662    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:11:33.069672    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:11:33.082157    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:11:33.082169    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:11:33.094287    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:11:33.094297    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:11:33.098856    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:11:33.098863    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:11:33.133689    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:11:33.133700    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:11:33.153914    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:11:33.153923    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:11:33.165522    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:11:33.165532    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:11:33.177328    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:11:33.177340    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:11:33.196645    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:11:33.196653    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:11:33.220064    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:11:33.220074    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:11:33.235917    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:11:33.235927    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:11:33.247538    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:11:33.247552    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:11:33.261798    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:11:33.261811    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:11:33.277034    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:11:33.277048    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:11:35.803427    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:11:40.805642    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:11:40.805735    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:11:40.817128    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:11:40.817194    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:11:40.828674    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:11:40.828742    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:11:40.839442    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:11:40.839503    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:11:40.850754    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:11:40.850810    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:11:40.871388    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:11:40.871447    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:11:40.887397    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:11:40.887455    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:11:40.902803    9149 logs.go:276] 0 containers: []
	W0327 11:11:40.902819    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:11:40.902879    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:11:40.913592    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:11:40.913609    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:11:40.913614    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:11:40.938401    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:11:40.938412    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:11:40.952749    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:11:40.952759    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:11:40.965281    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:11:40.965290    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:11:40.989012    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:11:40.989021    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:11:41.001826    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:11:41.001835    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:11:41.006391    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:11:41.006400    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:11:41.017744    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:11:41.017755    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:11:41.035450    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:11:41.035461    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:11:41.083411    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:11:41.083421    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:11:41.097900    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:11:41.097909    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:11:41.113042    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:11:41.113052    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:11:41.129155    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:11:41.129164    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:11:41.144202    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:11:41.144212    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:11:41.180988    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:11:41.180996    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:11:41.192609    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:11:41.192619    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:11:43.706088    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:11:48.706893    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0327 11:11:48.706973    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:11:48.719351    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:11:48.719420    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:11:48.731340    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:11:48.731414    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:11:48.746303    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:11:48.746371    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:11:48.758339    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:11:48.758419    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:11:48.770169    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:11:48.770242    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:11:48.782572    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:11:48.782640    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:11:48.793919    9149 logs.go:276] 0 containers: []
	W0327 11:11:48.793930    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:11:48.793987    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:11:48.806115    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:11:48.806132    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:11:48.806138    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:11:48.832373    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:11:48.832387    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:11:48.846220    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:11:48.846234    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:11:48.866377    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:11:48.866389    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:11:48.903321    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:11:48.903338    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:11:48.918592    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:11:48.918604    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:11:48.933953    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:11:48.933969    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:11:48.946989    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:11:48.947001    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:11:48.960699    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:11:48.960712    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:11:48.973972    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:11:48.973985    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:11:48.998520    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:11:48.998533    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:11:49.003520    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:11:49.003533    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:11:49.018180    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:11:49.018192    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:11:49.036953    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:11:49.036967    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:11:49.053961    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:11:49.053972    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:11:49.092293    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:11:49.092305    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:11:51.607899    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:11:56.610656    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:11:56.610933    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:11:56.634677    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:11:56.634792    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:11:56.651287    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:11:56.651375    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:11:56.664272    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:11:56.664347    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:11:56.678839    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:11:56.678911    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:11:56.689495    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:11:56.689571    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:11:56.699430    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:11:56.699522    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:11:56.710809    9149 logs.go:276] 0 containers: []
	W0327 11:11:56.710819    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:11:56.710872    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:11:56.723300    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:11:56.723319    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:11:56.723326    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:11:56.739537    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:11:56.739549    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:11:56.753159    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:11:56.753173    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:11:56.766485    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:11:56.766499    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:11:56.779403    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:11:56.779430    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:11:56.817911    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:11:56.817934    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:11:56.849853    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:11:56.849888    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:11:56.868930    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:11:56.868955    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:11:56.883551    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:11:56.883562    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:11:56.897989    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:11:56.898000    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:11:56.910153    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:11:56.910166    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:11:56.930326    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:11:56.930338    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:11:56.942072    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:11:56.942085    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:11:56.959338    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:11:56.959349    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:11:56.985003    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:11:56.985020    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:11:56.989914    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:11:56.989925    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
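The "describe nodes" gather does not rely on a kubectl from the host PATH: it runs the version-matched binary staged under /var/lib/minikube/binaries/v1.24.1/ against the in-node kubeconfig. A minimal sketch of that invocation:

	// describe_nodes.go — sketch of the versioned-kubectl gather above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.24.1/kubectl",
			"describe", "nodes",
			"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
		if err != nil {
			fmt.Println("describe nodes failed:", err)
		}
		fmt.Print(string(out))
	}
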
	I0327 11:11:59.532861    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:04.535065    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:04.535179    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:12:04.546749    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:12:04.546813    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:12:04.557681    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:12:04.557745    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:12:04.568380    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:12:04.568446    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:12:04.578900    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:12:04.578964    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:12:04.589863    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:12:04.589923    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:12:04.600378    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:12:04.600450    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:12:04.610680    9149 logs.go:276] 0 containers: []
	W0327 11:12:04.610690    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:12:04.610742    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:12:04.621105    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:12:04.621122    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:12:04.621127    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:12:04.655902    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:12:04.655916    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:12:04.683685    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:12:04.683698    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:12:04.698604    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:12:04.698618    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:12:04.713405    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:12:04.713415    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:12:04.724890    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:12:04.724901    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:12:04.737510    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:12:04.737523    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:12:04.773395    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:12:04.773403    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:12:04.777451    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:12:04.777457    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:12:04.794725    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:12:04.794735    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:12:04.806855    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:12:04.806866    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:12:04.818201    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:12:04.818212    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:12:04.833680    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:12:04.833690    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:12:04.845981    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:12:04.845990    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:12:04.862349    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:12:04.862359    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:12:04.886869    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:12:04.886879    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:12:07.405372    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:12.407551    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:12.407651    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:12:12.420484    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:12:12.420558    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:12:12.432521    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:12:12.432598    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:12:12.444000    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:12:12.444069    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:12:12.455598    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:12:12.455675    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:12:12.465770    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:12:12.465837    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:12:12.476458    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:12:12.476528    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:12:12.486430    9149 logs.go:276] 0 containers: []
	W0327 11:12:12.486439    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:12:12.486489    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:12:12.501047    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:12:12.501063    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:12:12.501068    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:12:12.516810    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:12:12.516824    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:12:12.529144    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:12:12.529154    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:12:12.542889    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:12:12.542898    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:12:12.567277    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:12:12.567288    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:12:12.580934    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:12:12.580947    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:12:12.595729    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:12:12.595740    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:12:12.612715    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:12:12.612726    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:12:12.635235    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:12:12.635242    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:12:12.671064    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:12:12.671077    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:12:12.675624    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:12:12.675633    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:12:12.687069    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:12:12.687080    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:12:12.698621    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:12:12.698634    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:12:12.734550    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:12:12.734565    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:12:12.749089    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:12:12.749100    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:12:12.760314    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:12:12.760325    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:12:15.273749    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:20.274118    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:20.274446    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:12:20.295417    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:12:20.295518    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:12:20.311061    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:12:20.311135    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:12:20.323157    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:12:20.323224    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:12:20.333757    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:12:20.333822    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:12:20.348859    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:12:20.348925    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:12:20.359873    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:12:20.359934    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:12:20.370417    9149 logs.go:276] 0 containers: []
	W0327 11:12:20.370427    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:12:20.370478    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:12:20.380883    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:12:20.380900    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:12:20.380905    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:12:20.385665    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:12:20.385671    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:12:20.397174    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:12:20.397187    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:12:20.408525    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:12:20.408534    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:12:20.422232    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:12:20.422246    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:12:20.436922    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:12:20.436937    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:12:20.449046    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:12:20.449059    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:12:20.466477    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:12:20.466487    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:12:20.490355    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:12:20.490366    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:12:20.517202    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:12:20.517217    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:12:20.528379    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:12:20.528389    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:12:20.539418    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:12:20.539428    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:12:20.551444    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:12:20.551458    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:12:20.585836    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:12:20.585845    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:12:20.621087    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:12:20.621101    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:12:20.634751    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:12:20.634764    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:12:23.152501    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:28.154418    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:28.154601    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:12:28.172614    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:12:28.172700    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:12:28.185516    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:12:28.185593    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:12:28.196564    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:12:28.196631    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:12:28.207563    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:12:28.207635    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:12:28.218070    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:12:28.218131    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:12:28.228527    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:12:28.228592    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:12:28.238827    9149 logs.go:276] 0 containers: []
	W0327 11:12:28.238841    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:12:28.238897    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:12:28.249553    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:12:28.249571    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:12:28.249577    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:12:28.254029    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:12:28.254037    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:12:28.267135    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:12:28.267146    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:12:28.279321    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:12:28.279335    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:12:28.294021    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:12:28.294031    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:12:28.315159    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:12:28.315170    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:12:28.330227    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:12:28.330239    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:12:28.367431    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:12:28.367439    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:12:28.381942    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:12:28.381953    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:12:28.405759    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:12:28.405769    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:12:28.419588    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:12:28.419599    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:12:28.431579    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:12:28.431590    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:12:28.443893    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:12:28.443903    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:12:28.479472    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:12:28.479482    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:12:28.501713    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:12:28.501725    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:12:28.515155    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:12:28.515165    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:12:31.029396    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:36.031773    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:36.031979    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:12:36.051226    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:12:36.051324    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:12:36.065197    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:12:36.065263    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:12:36.090300    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:12:36.090371    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:12:36.101181    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:12:36.101248    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:12:36.111539    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:12:36.111611    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:12:36.121771    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:12:36.121843    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:12:36.132020    9149 logs.go:276] 0 containers: []
	W0327 11:12:36.132033    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:12:36.132087    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:12:36.142356    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:12:36.142372    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:12:36.142377    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:12:36.156528    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:12:36.156540    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:12:36.168197    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:12:36.168209    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:12:36.172671    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:12:36.172680    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:12:36.207327    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:12:36.207341    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:12:36.218906    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:12:36.218918    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:12:36.234787    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:12:36.234799    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:12:36.270602    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:12:36.270611    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:12:36.289158    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:12:36.289170    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:12:36.300559    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:12:36.300571    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:12:36.312791    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:12:36.312802    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:12:36.324767    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:12:36.324778    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:12:36.339659    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:12:36.339670    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:12:36.357420    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:12:36.357430    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:12:36.380358    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:12:36.380365    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:12:36.394527    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:12:36.394540    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:12:38.919598    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:43.922272    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:43.922356    9149 kubeadm.go:591] duration metric: took 4m3.963085541s to restartPrimaryControlPlane
	W0327 11:12:43.922426    9149 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
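	The timestamps above show the shape of the wait: a healthz probe against https://10.0.2.15:8443/healthz roughly every 2.5 seconds, each failing after a 5-second client timeout, until the restart budget is exhausted ("took 4m3.963085541s to restartPrimaryControlPlane") and minikube falls back to `kubeadm reset` followed by a fresh `kubeadm init` below. A self-contained Go sketch of that probe loop — the interval, per-request timeout, and overall budget are read off the log's timestamps, not minikube's actual constants:

```go
package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver healthz endpoint until it answers 200
// or the budget runs out, mimicking the repeated "Checking apiserver
// healthz ... / stopped: context deadline exceeded" pairs in the log.
func waitForHealthz(url string, budget time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the 5s gap between each pair of log lines
		Transport: &http.Transport{
			// A standalone probe would skip verification, since the guest
			// apiserver's cert is not in the probing host's trust store.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2500 * time.Millisecond) // observed ~2.5s between attempts
	}
	return errors.New("control plane never became healthy; falling back to kubeadm reset")
}

func main() {
	fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
}
```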
	I0327 11:12:43.922456    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0327 11:12:44.965867    9149 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.0434055s)
	I0327 11:12:44.965923    9149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 11:12:44.970738    9149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 11:12:44.973513    9149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 11:12:44.976101    9149 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 11:12:44.976107    9149 kubeadm.go:156] found existing configuration files:
	
	I0327 11:12:44.976132    9149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/admin.conf
	I0327 11:12:44.978778    9149 kubeadm.go:162] "https://control-plane.minikube.internal:51147" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 11:12:44.978803    9149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 11:12:44.981172    9149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/kubelet.conf
	I0327 11:12:44.983892    9149 kubeadm.go:162] "https://control-plane.minikube.internal:51147" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 11:12:44.983910    9149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 11:12:44.986960    9149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/controller-manager.conf
	I0327 11:12:44.989588    9149 kubeadm.go:162] "https://control-plane.minikube.internal:51147" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 11:12:44.989609    9149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 11:12:44.992464    9149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/scheduler.conf
	I0327 11:12:44.996085    9149 kubeadm.go:162] "https://control-plane.minikube.internal:51147" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 11:12:44.996131    9149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
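	The four grep/rm pairs above are a stale-config sweep: each expected kubeconfig under /etc/kubernetes is checked for the current control-plane endpoint and removed when the endpoint (here, the whole file) is missing, so the `kubeadm init` below starts clean. A compact Go sketch of the same loop — the `run` helper is hypothetical; the endpoint and file names are copied from the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run is a hypothetical stand-in for minikube's ssh_runner: it executes a
// command in the guest and returns an error on non-zero exit.
func run(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func main() {
	// Endpoint and file list are taken verbatim from the log above.
	endpoint := "https://control-plane.minikube.internal:51147"
	files := []string{"admin.conf", "kubelet.conf",
		"controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := "/etc/kubernetes/" + f
		if err := run(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
			// grep exits non-zero when the endpoint or the file is missing
			// (status 2 in this run), so the stale kubeconfig is deleted
			// before kubeadm init regenerates it.
			run("sudo rm -f " + path)
		}
	}
}
```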
	I0327 11:12:44.999704    9149 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0327 11:12:45.018103    9149 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0327 11:12:45.018214    9149 kubeadm.go:309] [preflight] Running pre-flight checks
	I0327 11:12:45.065986    9149 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0327 11:12:45.066044    9149 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0327 11:12:45.066100    9149 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0327 11:12:45.117155    9149 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0327 11:12:45.121543    9149 out.go:204]   - Generating certificates and keys ...
	I0327 11:12:45.121578    9149 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0327 11:12:45.121610    9149 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0327 11:12:45.121645    9149 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0327 11:12:45.121722    9149 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0327 11:12:45.121761    9149 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0327 11:12:45.121788    9149 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0327 11:12:45.121871    9149 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0327 11:12:45.121954    9149 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0327 11:12:45.122006    9149 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0327 11:12:45.122089    9149 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0327 11:12:45.122113    9149 kubeadm.go:309] [certs] Using the existing "sa" key
	I0327 11:12:45.122140    9149 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0327 11:12:45.211182    9149 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0327 11:12:45.357171    9149 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0327 11:12:45.527405    9149 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0327 11:12:45.566980    9149 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0327 11:12:45.600100    9149 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0327 11:12:45.600646    9149 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0327 11:12:45.600668    9149 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0327 11:12:45.676139    9149 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0327 11:12:45.680009    9149 out.go:204]   - Booting up control plane ...
	I0327 11:12:45.680061    9149 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0327 11:12:45.680113    9149 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0327 11:12:45.680148    9149 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0327 11:12:45.680205    9149 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0327 11:12:45.680870    9149 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0327 11:12:50.184328    9149 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.502751 seconds
	I0327 11:12:50.184399    9149 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0327 11:12:50.187905    9149 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0327 11:12:50.703160    9149 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0327 11:12:50.703352    9149 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-599000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0327 11:12:51.207533    9149 kubeadm.go:309] [bootstrap-token] Using token: 644p81.qjqgd0r862w3b1j6
	I0327 11:12:51.209720    9149 out.go:204]   - Configuring RBAC rules ...
	I0327 11:12:51.209781    9149 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0327 11:12:51.209884    9149 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0327 11:12:51.216350    9149 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0327 11:12:51.217375    9149 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0327 11:12:51.218278    9149 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0327 11:12:51.219271    9149 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0327 11:12:51.222356    9149 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0327 11:12:51.403198    9149 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0327 11:12:51.612736    9149 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0327 11:12:51.613209    9149 kubeadm.go:309] 
	I0327 11:12:51.613249    9149 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0327 11:12:51.613252    9149 kubeadm.go:309] 
	I0327 11:12:51.613315    9149 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0327 11:12:51.613321    9149 kubeadm.go:309] 
	I0327 11:12:51.613350    9149 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0327 11:12:51.613392    9149 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0327 11:12:51.613423    9149 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0327 11:12:51.613426    9149 kubeadm.go:309] 
	I0327 11:12:51.613453    9149 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0327 11:12:51.613461    9149 kubeadm.go:309] 
	I0327 11:12:51.613484    9149 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0327 11:12:51.613487    9149 kubeadm.go:309] 
	I0327 11:12:51.613513    9149 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0327 11:12:51.613570    9149 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0327 11:12:51.613610    9149 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0327 11:12:51.613616    9149 kubeadm.go:309] 
	I0327 11:12:51.613662    9149 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0327 11:12:51.613707    9149 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0327 11:12:51.613713    9149 kubeadm.go:309] 
	I0327 11:12:51.613755    9149 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 644p81.qjqgd0r862w3b1j6 \
	I0327 11:12:51.613820    9149 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:45ef07f6bc58c462e2dd74e42d978dd37f899f74c2cb02d53ec43ace0e53a157 \
	I0327 11:12:51.613832    9149 kubeadm.go:309] 	--control-plane 
	I0327 11:12:51.613834    9149 kubeadm.go:309] 
	I0327 11:12:51.613879    9149 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0327 11:12:51.613883    9149 kubeadm.go:309] 
	I0327 11:12:51.613931    9149 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 644p81.qjqgd0r862w3b1j6 \
	I0327 11:12:51.613997    9149 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:45ef07f6bc58c462e2dd74e42d978dd37f899f74c2cb02d53ec43ace0e53a157 
	I0327 11:12:51.614060    9149 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0327 11:12:51.614067    9149 cni.go:84] Creating CNI manager for ""
	I0327 11:12:51.614075    9149 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:12:51.617914    9149 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0327 11:12:51.623103    9149 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0327 11:12:51.626186    9149 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0327 11:12:51.632123    9149 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0327 11:12:51.632176    9149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 11:12:51.632252    9149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-599000 minikube.k8s.io/updated_at=2024_03_27T11_12_51_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=475b39f6a1dc94a0c7060d2eec10d9b995edcd28 minikube.k8s.io/name=running-upgrade-599000 minikube.k8s.io/primary=true
	I0327 11:12:51.680424    9149 ops.go:34] apiserver oom_adj: -16
	I0327 11:12:51.680480    9149 kubeadm.go:1107] duration metric: took 48.354125ms to wait for elevateKubeSystemPrivileges
	W0327 11:12:51.680500    9149 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0327 11:12:51.680503    9149 kubeadm.go:393] duration metric: took 4m11.735837958s to StartCluster
	I0327 11:12:51.680513    9149 settings.go:142] acquiring lock: {Name:mkcc43d09d3f5881bfa353312da863c5f99807b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:12:51.680676    9149 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:12:51.681036    9149 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/kubeconfig: {Name:mkbb2a293ed6f55faa985236f49efd479958575e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:12:51.681234    9149 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:12:51.685883    9149 out.go:177] * Verifying Kubernetes components...
	I0327 11:12:51.681257    9149 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0327 11:12:51.681428    9149 config.go:182] Loaded profile config "running-upgrade-599000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 11:12:51.693935    9149 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-599000"
	I0327 11:12:51.693940    9149 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-599000"
	I0327 11:12:51.693948    9149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 11:12:51.693962    9149 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-599000"
	W0327 11:12:51.693966    9149 addons.go:243] addon storage-provisioner should already be in state true
	I0327 11:12:51.693976    9149 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-599000"
	I0327 11:12:51.694000    9149 host.go:66] Checking if "running-upgrade-599000" exists ...
	I0327 11:12:51.695106    9149 kapi.go:59] client config for running-upgrade-599000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/client.key", CAFile:"/Users/jenkins/minikube-integration/18517-6013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10432eb90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 11:12:51.695675    9149 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-599000"
	W0327 11:12:51.695679    9149 addons.go:243] addon default-storageclass should already be in state true
	I0327 11:12:51.695686    9149 host.go:66] Checking if "running-upgrade-599000" exists ...
	I0327 11:12:51.700909    9149 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 11:12:51.703912    9149 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 11:12:51.703917    9149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 11:12:51.703924    9149 sshutil.go:53] new ssh client: &{IP:localhost Port:51115 SSHKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/running-upgrade-599000/id_rsa Username:docker}
	I0327 11:12:51.704662    9149 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 11:12:51.704667    9149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 11:12:51.704671    9149 sshutil.go:53] new ssh client: &{IP:localhost Port:51115 SSHKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/running-upgrade-599000/id_rsa Username:docker}
	I0327 11:12:51.782993    9149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 11:12:51.787726    9149 api_server.go:52] waiting for apiserver process to appear ...
	I0327 11:12:51.787781    9149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 11:12:51.792726    9149 api_server.go:72] duration metric: took 111.479125ms to wait for apiserver process to appear ...
	I0327 11:12:51.792738    9149 api_server.go:88] waiting for apiserver healthz status ...
	I0327 11:12:51.792746    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:51.857171    9149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0327 11:12:51.858459    9149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 11:12:56.794871    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:56.794915    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:01.795261    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:01.795284    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:06.795677    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:06.795711    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:11.795956    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:11.796028    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:16.796568    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:16.796627    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:21.797334    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:21.797376    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0327 11:13:22.193658    9149 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0327 11:13:22.197961    9149 out.go:177] * Enabled addons: storage-provisioner
	I0327 11:13:22.210969    9149 addons.go:505] duration metric: took 30.530524167s for enable addons: enabled=[storage-provisioner]
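	The asymmetry above is consistent with what each addon needs: storage-provisioner is enabled by applying manifests already copied onto the node with the guest's kubectl, while the default-storageclass callback must list StorageClasses through the apiserver at 10.0.2.15:8443, which never answers, hence the i/o timeout. A sketch of that failing call, built from the rest.Config logged at kapi.go:59 — the client-go usage here is illustrative, not minikube's exact code; the host and cert paths are copied from the log:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Host and TLS file paths match the rest.Config printed at kapi.go:59.
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/18517-6013/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// With the apiserver unreachable, this List fails much as logged:
	// `Get ".../apis/storage.k8s.io/v1/storageclasses": dial tcp
	// 10.0.2.15:8443: i/o timeout`.
	_, err = cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
	fmt.Println(err)
}
```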
	I0327 11:13:26.797840    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:26.797893    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:31.799039    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:31.799069    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:36.799668    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:36.799688    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:41.801365    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:41.801421    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:46.803577    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:46.803614    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:51.805787    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:51.805912    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:13:51.833275    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:13:51.833348    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:13:51.843728    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:13:51.843790    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:13:51.855232    9149 logs.go:276] 2 containers: [7aae4d9bf2e6 110c0cc41345]
	I0327 11:13:51.855296    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:13:51.865988    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:13:51.866051    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:13:51.879830    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:13:51.879896    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:13:51.890715    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:13:51.890769    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:13:51.900838    9149 logs.go:276] 0 containers: []
	W0327 11:13:51.900848    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:13:51.900900    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:13:51.916633    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:13:51.916646    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:13:51.916652    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:13:51.928188    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:13:51.928199    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:13:51.940573    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:13:51.940587    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:13:51.952325    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:13:51.952335    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:13:51.977019    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:13:51.977027    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:13:51.991207    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:13:51.991217    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:13:51.995501    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:13:51.995507    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:13:52.031325    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:13:52.031338    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:13:52.045373    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:13:52.045383    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:13:52.056660    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:13:52.056671    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:13:52.071593    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:13:52.071602    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:13:52.088924    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:13:52.088935    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:13:52.100772    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:13:52.100785    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:13:54.641238    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:59.643447    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:59.643619    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:13:59.655476    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:13:59.655545    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:13:59.667844    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:13:59.667908    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:13:59.678127    9149 logs.go:276] 2 containers: [7aae4d9bf2e6 110c0cc41345]
	I0327 11:13:59.678187    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:13:59.692830    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:13:59.692888    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:13:59.702871    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:13:59.702929    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:13:59.713611    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:13:59.713669    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:13:59.726367    9149 logs.go:276] 0 containers: []
	W0327 11:13:59.726377    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:13:59.726426    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:13:59.736981    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:13:59.736998    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:13:59.737003    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:13:59.773421    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:13:59.773431    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:13:59.777694    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:13:59.777703    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:13:59.793892    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:13:59.793904    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:13:59.807548    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:13:59.807562    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:13:59.819488    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:13:59.819498    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:13:59.831167    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:13:59.831177    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:13:59.848695    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:13:59.848706    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:13:59.873820    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:13:59.873830    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:13:59.908165    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:13:59.908176    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:13:59.920506    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:13:59.920517    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:13:59.935584    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:13:59.935593    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:13:59.947292    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:13:59.947316    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:02.460158    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:07.461125    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:07.461510    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:07.501158    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:14:07.501288    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:07.528084    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:14:07.528172    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:07.542204    9149 logs.go:276] 2 containers: [7aae4d9bf2e6 110c0cc41345]
	I0327 11:14:07.542275    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:07.553967    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:14:07.554029    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:07.564972    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:14:07.565047    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:07.576089    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:14:07.576157    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:07.586925    9149 logs.go:276] 0 containers: []
	W0327 11:14:07.586935    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:07.586990    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:07.597950    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:14:07.597967    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:14:07.597972    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:14:07.609731    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:14:07.609742    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:14:07.626068    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:14:07.626084    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:14:07.639920    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:14:07.639933    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:14:07.660253    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:14:07.660263    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:07.672191    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:07.672202    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:07.710779    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:14:07.710788    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:14:07.725741    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:14:07.725751    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:14:07.741138    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:14:07.741149    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:14:07.757392    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:07.757402    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:07.780666    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:07.780674    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:07.784912    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:07.784919    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:07.820714    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:14:07.820724    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:14:10.337855    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:15.340185    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:15.340536    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:15.375079    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:14:15.375227    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:15.395843    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:14:15.395921    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:15.410747    9149 logs.go:276] 2 containers: [7aae4d9bf2e6 110c0cc41345]
	I0327 11:14:15.410822    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:15.422991    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:14:15.423051    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:15.434683    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:14:15.434747    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:15.446065    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:14:15.446130    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:15.457144    9149 logs.go:276] 0 containers: []
	W0327 11:14:15.457154    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:15.457201    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:15.468517    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:14:15.468533    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:14:15.468538    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:14:15.484481    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:14:15.484492    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:14:15.497841    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:14:15.497852    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:14:15.516316    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:14:15.516326    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:14:15.528201    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:15.528211    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:15.552812    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:15.552819    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:15.590803    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:15.590810    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:15.649123    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:14:15.649136    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:14:15.665419    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:14:15.665429    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:14:15.680489    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:14:15.680501    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:15.692949    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:15.692959    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:15.697136    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:14:15.697143    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:14:15.715586    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:14:15.715598    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
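The pattern above repeats throughout: each "Checking apiserver healthz" probe fails about five seconds later with "context deadline exceeded (Client.Timeout exceeded while awaiting headers)", and the next probe starts roughly three seconds after that. A minimal Go sketch of such a probe loop follows, under stated assumptions (5s client timeout, 3s retry interval, TLS verification skipped for the VM's self-signed apiserver certificate); it is an illustration of the pattern, not minikube's actual api_server.go:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		// Matches the ~5s gap before each "Client.Timeout exceeded" above.
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Assumption: skip verification of the in-VM self-signed cert.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err != nil {
    			fmt.Println("stopped:", err) // e.g. context deadline exceeded
    			time.Sleep(3 * time.Second)  // retry interval is an assumption
    			continue
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Println("apiserver healthy")
    			return
    		}
    	}
    }

Here the probe never succeeds, so the loop keeps cycling and the surrounding log collection runs again on every iteration.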
	I0327 11:14:18.233034    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:23.235485    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:23.235897    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:23.272193    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:14:23.272317    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:23.291450    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:14:23.291532    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:23.305938    9149 logs.go:276] 2 containers: [7aae4d9bf2e6 110c0cc41345]
	I0327 11:14:23.306011    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:23.318362    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:14:23.318428    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:23.329111    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:14:23.329175    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:23.340144    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:14:23.340208    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:23.351561    9149 logs.go:276] 0 containers: []
	W0327 11:14:23.351572    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:23.351628    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:23.362870    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:14:23.362886    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:23.362891    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:23.398101    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:14:23.398116    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:14:23.412551    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:14:23.412561    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:14:23.425186    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:14:23.425200    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:14:23.437530    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:14:23.437545    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:14:23.450118    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:14:23.450127    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:23.461557    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:23.461571    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:23.497389    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:23.497405    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:23.501916    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:14:23.501922    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:14:23.517158    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:14:23.517172    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:14:23.528493    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:14:23.528502    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:14:23.543678    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:14:23.543692    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:14:23.562003    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:23.562013    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:26.088214    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:31.090708    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:31.090945    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:31.110908    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:14:31.110994    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:31.125244    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:14:31.125310    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:31.137430    9149 logs.go:276] 2 containers: [7aae4d9bf2e6 110c0cc41345]
	I0327 11:14:31.137489    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:31.153182    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:14:31.153240    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:31.164897    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:14:31.164964    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:31.177168    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:14:31.177248    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:31.187602    9149 logs.go:276] 0 containers: []
	W0327 11:14:31.187615    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:31.187673    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:31.198281    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:14:31.198293    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:31.198299    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:31.235226    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:31.235237    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:31.239992    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:31.239998    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:31.274958    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:14:31.274969    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:14:31.287035    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:14:31.287047    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:14:31.299528    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:14:31.299538    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:14:31.317697    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:14:31.317706    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:14:31.332822    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:14:31.332832    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:14:31.347610    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:14:31.347622    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:14:31.363361    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:14:31.363371    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:14:31.376192    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:14:31.376201    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:14:31.389357    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:31.389366    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:31.412911    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:14:31.412921    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
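Before pulling logs, each cycle re-discovers the control-plane containers: one "docker ps -a" per component, filtered on the kubelet's k8s_<component> name prefix and formatted to emit only IDs. Components not deployed on this cluster, such as "kindnet", legitimately come back with 0 containers and the warning seen above. A minimal sketch of that discovery step in Go (an assumed shape, not minikube's logs.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists all containers (running or exited) whose name
    // carries the kubelet's k8s_<component> prefix, printing IDs only.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
    	for _, c := range components {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Printf("%s: %v\n", c, err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }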
	I0327 11:14:33.928168    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:38.930466    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:38.930708    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:38.951175    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:14:38.951263    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:38.965479    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:14:38.965542    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:38.977373    9149 logs.go:276] 2 containers: [7aae4d9bf2e6 110c0cc41345]
	I0327 11:14:38.977435    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:38.988220    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:14:38.988285    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:38.998357    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:14:38.998418    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:39.012985    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:14:39.013042    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:39.022978    9149 logs.go:276] 0 containers: []
	W0327 11:14:39.022987    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:39.023037    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:39.033726    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:14:39.033742    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:39.033747    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:39.070637    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:39.070646    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:39.075364    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:14:39.075371    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:14:39.087939    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:14:39.087950    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:14:39.099966    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:39.099977    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:39.124530    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:14:39.124546    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:39.137215    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:39.137227    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:39.172459    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:14:39.172471    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:14:39.186656    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:14:39.186667    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:14:39.200347    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:14:39.200360    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:14:39.215099    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:14:39.215110    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:14:39.226710    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:14:39.226720    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:14:39.243879    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:14:39.243890    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:14:41.757872    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:46.760166    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:46.760325    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:46.771095    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:14:46.771171    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:46.782068    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:14:46.782145    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:46.792673    9149 logs.go:276] 2 containers: [7aae4d9bf2e6 110c0cc41345]
	I0327 11:14:46.792742    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:46.803475    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:14:46.803545    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:46.814799    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:14:46.814865    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:46.825529    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:14:46.825586    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:46.836293    9149 logs.go:276] 0 containers: []
	W0327 11:14:46.836305    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:46.836375    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:46.846553    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:14:46.846567    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:14:46.846572    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:14:46.861474    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:14:46.861484    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:14:46.872850    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:14:46.872859    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:14:46.884468    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:14:46.884477    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:46.895405    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:14:46.895414    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:14:46.910661    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:46.910671    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:46.915689    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:46.915698    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:46.953418    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:14:46.953431    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:14:46.967976    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:14:46.967986    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:14:46.979097    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:14:46.979108    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:14:46.990846    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:14:46.990857    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:14:47.008583    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:47.008593    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:47.033430    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:47.033438    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:49.572392    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:54.574741    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:54.574904    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:54.593883    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:14:54.593987    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:54.607928    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:14:54.608004    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:54.619323    9149 logs.go:276] 2 containers: [7aae4d9bf2e6 110c0cc41345]
	I0327 11:14:54.619386    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:54.629418    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:14:54.629485    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:54.642962    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:14:54.643027    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:54.658108    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:14:54.658177    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:54.667702    9149 logs.go:276] 0 containers: []
	W0327 11:14:54.667714    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:54.667773    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:54.678253    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:14:54.678270    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:14:54.678276    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:14:54.693139    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:14:54.693149    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:14:54.704917    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:14:54.704926    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:14:54.722168    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:14:54.722181    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:14:54.734534    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:14:54.734546    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:54.747063    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:54.747082    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:54.751684    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:54.751696    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:54.785638    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:14:54.785649    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:14:54.800395    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:14:54.800405    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:14:54.814618    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:14:54.814629    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:14:54.826581    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:14:54.826596    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:14:54.838351    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:54.838364    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:54.862311    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:54.862320    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:57.403056    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:02.405487    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:02.405679    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:02.425794    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:15:02.425885    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:02.439956    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:15:02.440034    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:02.451722    9149 logs.go:276] 2 containers: [7aae4d9bf2e6 110c0cc41345]
	I0327 11:15:02.451794    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:02.461876    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:15:02.461933    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:02.472659    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:15:02.472732    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:02.483365    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:15:02.483426    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:02.493010    9149 logs.go:276] 0 containers: []
	W0327 11:15:02.493022    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:02.493079    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:02.503478    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:15:02.503493    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:15:02.503499    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:02.515024    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:15:02.515037    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:15:02.529317    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:15:02.529328    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:15:02.545211    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:15:02.545222    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:15:02.560027    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:15:02.560036    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:15:02.577668    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:15:02.577677    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:15:02.589010    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:15:02.589023    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:15:02.600696    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:02.600706    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:02.623872    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:02.623879    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:02.659940    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:02.659950    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:02.664196    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:02.664202    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:02.697784    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:15:02.697797    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:15:02.712173    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:15:02.712183    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:15:05.225701    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:10.228029    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:10.228260    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:10.245408    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:15:10.245492    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:10.259465    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:15:10.259529    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:10.270961    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:15:10.271023    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:10.285052    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:15:10.285112    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:10.295796    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:15:10.295853    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:10.306856    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:15:10.306915    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:10.317264    9149 logs.go:276] 0 containers: []
	W0327 11:15:10.317274    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:10.317323    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:10.328362    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:15:10.328379    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:10.328384    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:10.365015    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:10.365025    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:10.388118    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:15:10.388125    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:15:10.406525    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:15:10.406535    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:15:10.422285    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:15:10.422295    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:15:10.433837    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:15:10.433846    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:10.445810    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:15:10.445820    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:15:10.458064    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:15:10.458074    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:15:10.475199    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:10.475209    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:10.515313    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:15:10.515325    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:15:10.530288    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:15:10.530299    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:15:10.541249    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:15:10.541259    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:15:10.552686    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:10.552695    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:10.557516    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:15:10.557523    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:15:10.571806    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:15:10.571814    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:15:13.087909    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:18.090455    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:18.090721    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:18.118380    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:15:18.118504    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:18.135461    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:15:18.135552    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:18.149508    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:15:18.149575    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:18.160739    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:15:18.160803    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:18.170576    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:15:18.170643    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:18.181017    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:15:18.181080    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:18.191323    9149 logs.go:276] 0 containers: []
	W0327 11:15:18.191336    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:18.191391    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:18.201969    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:15:18.201989    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:15:18.201994    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:15:18.216680    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:18.216691    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:18.240632    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:15:18.240641    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:15:18.255438    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:15:18.255447    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:15:18.266612    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:15:18.266625    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:15:18.278201    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:15:18.278217    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:15:18.290212    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:15:18.290223    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:18.304373    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:18.304383    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:18.342656    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:18.342669    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:18.347006    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:18.347011    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:18.381183    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:15:18.381196    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:15:18.398272    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:15:18.398283    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:15:18.413448    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:15:18.413460    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:15:18.430546    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:15:18.430555    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:15:18.445039    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:15:18.445048    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:15:20.961633    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:25.964162    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:25.964514    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:25.995398    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:15:25.995522    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:26.014199    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:15:26.014289    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:26.028277    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:15:26.028348    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:26.044162    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:15:26.044235    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:26.054113    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:15:26.054174    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:26.068089    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:15:26.068150    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:26.077802    9149 logs.go:276] 0 containers: []
	W0327 11:15:26.077811    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:26.077859    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:26.088579    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:15:26.088592    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:26.088597    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:26.115014    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:15:26.115024    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:26.126188    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:26.126199    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:26.130548    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:15:26.130555    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:15:26.142075    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:15:26.142086    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:15:26.156693    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:15:26.156703    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:15:26.170600    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:15:26.170608    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:15:26.182265    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:15:26.182275    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:15:26.193541    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:26.193551    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:26.227744    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:15:26.227755    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:15:26.239376    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:15:26.239386    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:15:26.251032    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:15:26.251043    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:15:26.269842    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:15:26.269852    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:15:26.281842    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:26.281851    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:26.318696    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:15:26.318707    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:15:28.835311    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:33.837635    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:33.838050    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:33.867953    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:15:33.868067    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:33.886572    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:15:33.886649    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:33.900878    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:15:33.900954    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:33.914071    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:15:33.914138    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:33.925209    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:15:33.925269    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:33.935824    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:15:33.935892    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:33.946490    9149 logs.go:276] 0 containers: []
	W0327 11:15:33.946500    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:33.946548    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:33.962821    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:15:33.962840    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:33.962845    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:33.985939    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:33.985946    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:34.019620    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:15:34.019631    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:15:34.033855    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:15:34.033868    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:15:34.048060    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:15:34.048072    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:15:34.059234    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:34.059244    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:34.063684    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:15:34.063690    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:15:34.077156    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:15:34.077166    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:15:34.092750    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:15:34.092760    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:15:34.104920    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:15:34.104930    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:15:34.116325    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:15:34.116336    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:15:34.128552    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:15:34.128562    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:15:34.145825    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:15:34.145836    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:34.157427    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:34.157438    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:34.193633    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:15:34.193646    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
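After discovery, every source is dumped the same way: container logs via "docker logs --tail 400 <id>", host units via journalctl, plus dmesg and a "kubectl describe nodes" against the in-VM kubeconfig, each executed through /bin/bash -c on the guest. A minimal local sketch of that gathering step (assumed; in the real flow ssh_runner executes these inside the VM over SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather mirrors the "Gathering logs for <name> ..." lines above:
    // run one shell command and print whatever it produced.
    func gather(name, cmd string) {
    	fmt.Printf("Gathering logs for %s ...\n", name)
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Printf("%s failed: %v\n", name, err)
    	}
    	fmt.Print(string(out))
    }

    func main() {
    	gather("kubelet", "sudo journalctl -u kubelet -n 400")
    	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
    	gather("kube-apiserver [be48d14bfc2d]", "docker logs --tail 400 be48d14bfc2d")
    }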
	I0327 11:15:36.709460    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:41.711671    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:41.711813    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:41.723382    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:15:41.723446    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:41.734013    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:15:41.734092    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:41.744712    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:15:41.744776    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:41.755336    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:15:41.755398    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:41.765533    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:15:41.765596    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:41.776507    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:15:41.776572    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:41.787100    9149 logs.go:276] 0 containers: []
	W0327 11:15:41.787110    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:41.787166    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:41.797918    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:15:41.797934    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:15:41.797942    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:41.809864    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:41.809876    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:41.846002    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:15:41.846014    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:15:41.857763    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:41.857773    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:41.881790    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:41.881802    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:41.920516    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:15:41.920532    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:15:41.935261    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:15:41.935271    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:15:41.946719    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:41.946730    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:41.951038    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:15:41.951044    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:15:41.969496    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:15:41.969506    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:15:41.980855    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:15:41.980865    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:15:41.992726    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:15:41.992737    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:15:42.014659    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:15:42.014669    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:15:42.026427    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:15:42.026438    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:15:42.038448    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:15:42.038458    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:15:44.555025    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:49.557250    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:49.557436    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:49.573897    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:15:49.573973    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:49.588229    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:15:49.588295    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:49.599819    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:15:49.599893    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:49.611039    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:15:49.611109    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:49.622147    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:15:49.622212    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:49.632908    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:15:49.632971    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:49.644236    9149 logs.go:276] 0 containers: []
	W0327 11:15:49.644247    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:49.644298    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:49.654857    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:15:49.654874    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:15:49.654879    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:15:49.669075    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:15:49.669084    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:15:49.680891    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:15:49.680900    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:15:49.695763    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:49.695778    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:49.719027    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:49.719036    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:49.755395    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:15:49.755406    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:15:49.769709    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:15:49.769719    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:15:49.781262    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:15:49.781272    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:49.793999    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:15:49.794009    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:15:49.810782    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:15:49.810797    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:15:49.825933    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:49.825943    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:49.863949    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:49.863957    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:49.868718    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:15:49.868724    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:15:49.880948    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:15:49.880957    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:15:49.899260    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:15:49.899274    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:15:52.415979    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:57.418389    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:57.418762    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:57.447553    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:15:57.447679    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:57.466603    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:15:57.466700    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:57.480135    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:15:57.480206    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:57.494460    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:15:57.494528    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:57.504918    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:15:57.504986    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:57.515762    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:15:57.515818    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:57.530097    9149 logs.go:276] 0 containers: []
	W0327 11:15:57.530107    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:57.530154    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:57.540289    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:15:57.540309    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:57.540313    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:57.583226    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:15:57.583237    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:15:57.595756    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:15:57.595768    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:15:57.611466    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:15:57.611481    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:15:57.626359    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:15:57.626370    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:15:57.647263    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:15:57.647274    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:15:57.663221    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:15:57.663235    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:15:57.680256    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:15:57.680271    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:15:57.691917    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:15:57.691927    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:15:57.703153    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:57.703163    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:57.708346    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:15:57.708357    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:15:57.722994    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:57.723005    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:57.760860    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:15:57.760869    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:15:57.778943    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:57.778952    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:57.804580    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:15:57.804592    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:16:00.316385    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:05.318565    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:05.318729    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:16:05.337457    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:16:05.337539    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:16:05.351601    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:16:05.351677    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:16:05.372034    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:16:05.372110    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:16:05.393502    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:16:05.393578    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:16:05.409801    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:16:05.409877    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:16:05.422102    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:16:05.422175    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:16:05.434106    9149 logs.go:276] 0 containers: []
	W0327 11:16:05.434117    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:16:05.434180    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:16:05.446757    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:16:05.446774    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:16:05.446779    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:16:05.462734    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:16:05.462747    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:16:05.475451    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:16:05.475465    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:16:05.515742    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:16:05.515761    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:16:05.520405    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:16:05.520414    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:16:05.534268    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:16:05.534280    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:16:05.545733    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:16:05.545744    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:16:05.557611    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:16:05.557625    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:16:05.570439    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:16:05.570451    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:16:05.584914    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:16:05.584928    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:16:05.607040    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:16:05.607051    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:16:05.619364    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:16:05.619377    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:16:05.656594    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:16:05.656608    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:16:05.668420    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:16:05.668432    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:16:05.680355    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:16:05.680367    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:16:08.207272    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:13.208633    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:13.208944    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:16:13.242069    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:16:13.242192    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:16:13.269473    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:16:13.269552    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:16:13.282666    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:16:13.282740    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:16:13.294406    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:16:13.294480    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:16:13.305636    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:16:13.305703    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:16:13.316281    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:16:13.316346    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:16:13.326907    9149 logs.go:276] 0 containers: []
	W0327 11:16:13.326918    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:16:13.326986    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:16:13.337402    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:16:13.337417    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:16:13.337422    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:16:13.374109    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:16:13.374118    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:16:13.385822    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:16:13.385833    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:16:13.400980    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:16:13.400990    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:16:13.413003    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:16:13.413016    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:16:13.427984    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:16:13.427994    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:16:13.439188    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:16:13.439200    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:16:13.451834    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:16:13.451844    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:16:13.469100    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:16:13.469110    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:16:13.480555    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:16:13.480565    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:16:13.499045    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:16:13.499055    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:16:13.523546    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:16:13.523555    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:16:13.561500    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:16:13.561511    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:16:13.565694    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:16:13.565702    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:16:13.580437    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:16:13.580448    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:16:16.102350    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:21.104606    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:21.104779    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:16:21.119980    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:16:21.120052    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:16:21.131682    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:16:21.131745    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:16:21.142603    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:16:21.142674    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:16:21.160242    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:16:21.160332    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:16:21.172106    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:16:21.172198    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:16:21.184402    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:16:21.184489    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:16:21.196937    9149 logs.go:276] 0 containers: []
	W0327 11:16:21.196947    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:16:21.197030    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:16:21.208028    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:16:21.208047    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:16:21.208053    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:16:21.247463    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:16:21.247483    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:16:21.259881    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:16:21.259892    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:16:21.273020    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:16:21.273032    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:16:21.293953    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:16:21.293976    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:16:21.309882    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:16:21.309895    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:16:21.325127    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:16:21.325139    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:16:21.340644    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:16:21.340654    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:16:21.353230    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:16:21.353242    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:16:21.364983    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:16:21.364994    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:16:21.402718    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:16:21.402729    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:16:21.407511    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:16:21.407519    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:16:21.422301    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:16:21.422319    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:16:21.434349    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:16:21.434359    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:16:21.446254    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:16:21.446265    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:16:23.972298    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:28.974692    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:28.974809    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:16:28.988466    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:16:28.988528    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:16:28.999456    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:16:28.999519    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:16:29.010094    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:16:29.010155    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:16:29.020754    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:16:29.020816    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:16:29.031484    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:16:29.031546    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:16:29.042478    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:16:29.042541    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:16:29.053305    9149 logs.go:276] 0 containers: []
	W0327 11:16:29.053315    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:16:29.053365    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:16:29.064042    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:16:29.064058    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:16:29.064063    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:16:29.080343    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:16:29.080354    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:16:29.098389    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:16:29.098401    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:16:29.109973    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:16:29.109986    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:16:29.122683    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:16:29.122696    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:16:29.159264    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:16:29.159278    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:16:29.173736    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:16:29.173748    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:16:29.185041    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:16:29.185052    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:16:29.221415    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:16:29.221423    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:16:29.233923    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:16:29.233933    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:16:29.246358    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:16:29.246369    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:16:29.258887    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:16:29.258898    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:16:29.270681    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:16:29.270691    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:16:29.285119    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:16:29.285131    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:16:29.309059    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:16:29.309068    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:16:31.815363    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:36.817598    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:36.817812    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:16:36.835694    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:16:36.835781    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:16:36.848159    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:16:36.848230    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:16:36.859520    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:16:36.859591    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:16:36.869976    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:16:36.870041    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:16:36.880431    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:16:36.880500    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:16:36.891166    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:16:36.891231    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:16:36.901459    9149 logs.go:276] 0 containers: []
	W0327 11:16:36.901472    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:16:36.901528    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:16:36.914303    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:16:36.914320    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:16:36.914324    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:16:36.952996    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:16:36.953005    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:16:36.967124    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:16:36.967137    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:16:36.982101    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:16:36.982115    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:16:36.996652    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:16:36.996664    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:16:37.008154    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:16:37.008164    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:16:37.019627    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:16:37.019637    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:16:37.031210    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:16:37.031220    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:16:37.035616    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:16:37.035624    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:16:37.046721    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:16:37.046735    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:16:37.070618    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:16:37.070625    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:16:37.105183    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:16:37.105193    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:16:37.116687    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:16:37.116698    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:16:37.132141    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:16:37.132151    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:16:37.149327    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:16:37.149341    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:16:39.664967    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:44.667464    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:44.667635    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:16:44.679003    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:16:44.679074    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:16:44.689815    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:16:44.689876    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:16:44.701832    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:16:44.701898    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:16:44.712351    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:16:44.712415    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:16:44.722675    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:16:44.722728    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:16:44.733453    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:16:44.733517    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:16:44.743452    9149 logs.go:276] 0 containers: []
	W0327 11:16:44.743466    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:16:44.743544    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:16:44.754149    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:16:44.754167    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:16:44.754172    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:16:44.767294    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:16:44.767304    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:16:44.781435    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:16:44.781446    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:16:44.793300    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:16:44.793310    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:16:44.805162    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:16:44.805174    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:16:44.842032    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:16:44.842042    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:16:44.859380    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:16:44.859392    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:16:44.871556    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:16:44.871566    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:16:44.908292    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:16:44.908303    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:16:44.912550    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:16:44.912556    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:16:44.924362    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:16:44.924373    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:16:44.939301    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:16:44.939310    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:16:44.951453    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:16:44.951463    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:16:44.975590    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:16:44.975598    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:16:44.992689    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:16:44.992699    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:16:47.506366    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:52.508643    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:52.513407    9149 out.go:177] 
	W0327 11:16:52.516263    9149 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0327 11:16:52.516276    9149 out.go:239] * 
	W0327 11:16:52.517095    9149 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:16:52.526203    9149 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-599000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-03-27 11:16:52.60677 -0700 PDT m=+1528.419821293
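The failure mode in the stderr above is the apiserver health probe timing out on every attempt: each "Checking apiserver healthz" line is followed about five seconds later by a "stopped: ... Client.Timeout exceeded" line, until the "wait 6m0s for node" budget expires. The following is a minimal Go sketch of such a probe loop, for reference only; the client setup and names are illustrative assumptions, not minikube's actual implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// checkHealthz performs one probe against the apiserver healthz endpoint.
	func checkHealthz(url string) error {
		client := &http.Client{
			// Assumed: matches the ~5s gap between each "Checking" and "stopped" pair above.
			Timeout: 5 * time.Second,
			// Assumed: the guest apiserver uses a self-signed cert, so verification is skipped.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			// Surfaces as: Get "...": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d", resp.StatusCode)
		}
		return nil
	}

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // the "wait 6m0s for node" budget in the error above
		for time.Now().Before(deadline) {
			if err := checkHealthz("https://10.0.2.15:8443/healthz"); err == nil {
				fmt.Println("apiserver healthy")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("apiserver healthz never reported healthy: context deadline exceeded")
	}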
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-599000 -n running-upgrade-599000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-599000 -n running-upgrade-599000: exit status 2 (15.770477833s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
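The status probe above comes from the post-mortem helper shelling out to the built binary and tolerating a non-zero exit. A hedged sketch of that pattern follows, using Go's os/exec; the binary path, arguments, and profile name are copied from the log lines above, while the helper itself is illustrative rather than the test suite's actual code.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Binary path and profile name taken from the post-mortem log lines above.
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}",
			"-p", "running-upgrade-599000", "-n", "running-upgrade-599000")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out) // prints "Running", as in the stdout block above
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// exit status 2 is tolerated ("may be ok"): the host VM can be
			// Running while the Kubernetes apiserver inside it is unhealthy.
			fmt.Printf("status error: exit status %d (may be ok)\n", exitErr.ExitCode())
		}
	}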
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-599000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p force-systemd-flag-743000          | force-systemd-flag-743000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 11:06 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | force-systemd-env-226000              | force-systemd-env-226000  | jenkins | v1.33.0-beta.0 | 27 Mar 24 11:06 PDT |                     |
	|         | ssh docker info --format              |                           |         |                |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |                |                     |                     |
	| delete  | -p force-systemd-env-226000           | force-systemd-env-226000  | jenkins | v1.33.0-beta.0 | 27 Mar 24 11:06 PDT | 27 Mar 24 11:06 PDT |
	| start   | -p docker-flags-486000                | docker-flags-486000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 11:06 PDT |                     |
	|         | --cache-images=false                  |                           |         |                |                     |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --install-addons=false                |                           |         |                |                     |                     |
	|         | --wait=false                          |                           |         |                |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |                |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |                |                     |                     |
	|         | --docker-opt=debug                    |                           |         |                |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | force-systemd-flag-743000             | force-systemd-flag-743000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 11:06 PDT |                     |
	|         | ssh docker info --format              |                           |         |                |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |                |                     |                     |
	| delete  | -p force-systemd-flag-743000          | force-systemd-flag-743000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 11:06 PDT | 27 Mar 24 11:06 PDT |
	| start   | -p cert-expiration-739000             | cert-expiration-739000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 11:06 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | docker-flags-486000 ssh               | docker-flags-486000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 11:06 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |                |                     |                     |
	|         | --property=Environment                |                           |         |                |                     |                     |
	|         | --no-pager                            |                           |         |                |                     |                     |
	| ssh     | docker-flags-486000 ssh               | docker-flags-486000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 11:06 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |                |                     |                     |
	|         | --property=ExecStart                  |                           |         |                |                     |                     |
	|         | --no-pager                            |                           |         |                |                     |                     |
	| delete  | -p docker-flags-486000                | docker-flags-486000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 11:06 PDT | 27 Mar 24 11:06 PDT |
	| start   | -p cert-options-567000                | cert-options-567000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 11:06 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |                |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |                |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |                |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | cert-options-567000 ssh               | cert-options-567000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 11:07 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |                |                     |                     |
	| ssh     | -p cert-options-567000 -- sudo        | cert-options-567000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 11:07 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |                |                     |                     |
	| delete  | -p cert-options-567000                | cert-options-567000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 11:07 PDT | 27 Mar 24 11:07 PDT |
	| start   | -p running-upgrade-599000             | minikube                  | jenkins | v1.26.0        | 27 Mar 24 11:07 PDT | 27 Mar 24 11:08 PDT |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |                |                     |                     |
	| start   | -p running-upgrade-599000             | running-upgrade-599000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 11:08 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| start   | -p cert-expiration-739000             | cert-expiration-739000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 11:09 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| delete  | -p cert-expiration-739000             | cert-expiration-739000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 11:10 PDT | 27 Mar 24 11:10 PDT |
	| start   | -p kubernetes-upgrade-600000          | kubernetes-upgrade-600000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 11:10 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| stop    | -p kubernetes-upgrade-600000          | kubernetes-upgrade-600000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 11:10 PDT | 27 Mar 24 11:10 PDT |
	| start   | -p kubernetes-upgrade-600000          | kubernetes-upgrade-600000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 11:10 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0   |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| delete  | -p kubernetes-upgrade-600000          | kubernetes-upgrade-600000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 11:10 PDT | 27 Mar 24 11:10 PDT |
	| start   | -p stopped-upgrade-042000             | minikube                  | jenkins | v1.26.0        | 27 Mar 24 11:10 PDT | 27 Mar 24 11:11 PDT |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |                |                     |                     |
	| stop    | stopped-upgrade-042000 stop           | minikube                  | jenkins | v1.26.0        | 27 Mar 24 11:11 PDT | 27 Mar 24 11:11 PDT |
	| start   | -p stopped-upgrade-042000             | stopped-upgrade-042000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 11:11 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 11:11:26
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 11:11:26.312392    9387 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:11:26.312559    9387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:11:26.312563    9387 out.go:304] Setting ErrFile to fd 2...
	I0327 11:11:26.312566    9387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:11:26.312709    9387 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:11:26.313834    9387 out.go:298] Setting JSON to false
	I0327 11:11:26.333114    9387 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6059,"bootTime":1711557027,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:11:26.333176    9387 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:11:26.335604    9387 out.go:177] * [stopped-upgrade-042000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:11:26.343793    9387 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:11:26.343830    9387 notify.go:220] Checking for updates...
	I0327 11:11:26.347733    9387 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:11:26.351728    9387 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:11:26.355533    9387 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:11:26.358764    9387 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:11:26.361845    9387 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:11:26.365181    9387 config.go:182] Loaded profile config "stopped-upgrade-042000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 11:11:26.368687    9387 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0327 11:11:26.371769    9387 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:11:26.375745    9387 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 11:11:26.382731    9387 start.go:297] selected driver: qemu2
	I0327 11:11:26.382737    9387 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-042000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51357 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-042000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 11:11:26.382810    9387 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:11:26.385525    9387 cni.go:84] Creating CNI manager for ""
	I0327 11:11:26.385544    9387 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
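	The "bridge" recommendation refers to the standard CNI bridge plugin. A minimal conflist of that shape looks like the following (illustrative only; the file name, network name, and bridge name are placeholders, not what minikube writes, though the 10.244.0.0/16 subnet matches the one substituted into the CNI configs later in this log):
	
		# Hypothetical /etc/cni/net.d/example-bridge.conflist
		cat <<'EOF'
		{
		  "cniVersion": "0.3.1",
		  "name": "example-bridge",
		  "plugins": [
		    {
		      "type": "bridge",
		      "bridge": "bridge0",
		      "isGateway": true,
		      "ipMasq": true,
		      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
		    }
		  ]
		}
		EOF
	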
	I0327 11:11:26.385573    9387 start.go:340] cluster config:
	{Name:stopped-upgrade-042000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51357 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-042000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 11:11:26.385631    9387 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:11:26.394715    9387 out.go:177] * Starting "stopped-upgrade-042000" primary control-plane node in "stopped-upgrade-042000" cluster
	I0327 11:11:26.398728    9387 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0327 11:11:26.398743    9387 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0327 11:11:26.398748    9387 cache.go:56] Caching tarball of preloaded images
	I0327 11:11:26.398802    9387 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:11:26.398808    9387 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0327 11:11:26.398862    9387 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/config.json ...
	I0327 11:11:26.399333    9387 start.go:360] acquireMachinesLock for stopped-upgrade-042000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:11:26.399368    9387 start.go:364] duration metric: took 28.041µs to acquireMachinesLock for "stopped-upgrade-042000"
	I0327 11:11:26.399378    9387 start.go:96] Skipping create...Using existing machine configuration
	I0327 11:11:26.399381    9387 fix.go:54] fixHost starting: 
	I0327 11:11:26.399489    9387 fix.go:112] recreateIfNeeded on stopped-upgrade-042000: state=Stopped err=<nil>
	W0327 11:11:26.399498    9387 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 11:11:26.403759    9387 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-042000" ...
	I0327 11:11:24.997890    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:11:24.998292    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:11:25.036439    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:11:25.036586    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:11:25.061172    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:11:25.061262    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:11:25.075350    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:11:25.075425    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:11:25.090850    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:11:25.090916    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:11:25.102265    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:11:25.102326    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:11:25.113316    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:11:25.113390    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:11:25.123276    9149 logs.go:276] 0 containers: []
	W0327 11:11:25.123290    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:11:25.123348    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:11:25.134026    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:11:25.134044    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:11:25.134049    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:11:25.138704    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:11:25.138714    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:11:25.174620    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:11:25.174632    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:11:25.190116    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:11:25.190126    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:11:25.201916    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:11:25.201928    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:11:25.238969    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:11:25.238979    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:11:25.262139    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:11:25.262145    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:11:25.274586    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:11:25.274598    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:11:25.286881    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:11:25.286890    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:11:25.303714    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:11:25.303724    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:11:25.317593    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:11:25.317607    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:11:25.355635    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:11:25.355648    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:11:25.370203    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:11:25.370216    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:11:25.387518    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:11:25.387528    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:11:25.399301    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:11:25.399311    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:11:25.410896    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:11:25.410906    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
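	The interleaved pid-9149 lines above belong to a second cluster under test: it polls its apiserver's health endpoint and, on every timeout, dumps component logs. The probe it keeps retrying is equivalent to the following (sketch; -k skips TLS verification for brevity). Note that 10.0.2.15 sits on QEMU's user-mode network and only ports explicitly forwarded with hostfwd are reachable from the macOS host, which is consistent with every probe here timing out:
	
		# Probe the apiserver health endpoint the way api_server.go does.
		curl -k --max-time 5 https://10.0.2.15:8443/healthz
	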
	I0327 11:11:26.411749    9387 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/stopped-upgrade-042000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/stopped-upgrade-042000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/stopped-upgrade-042000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51322-:22,hostfwd=tcp::51323-:2376,hostname=stopped-upgrade-042000 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/stopped-upgrade-042000/disk.qcow2
	I0327 11:11:26.458932    9387 main.go:141] libmachine: STDOUT: 
	I0327 11:11:26.458968    9387 main.go:141] libmachine: STDERR: 
	I0327 11:11:26.458973    9387 main.go:141] libmachine: Waiting for VM to start (ssh -p 51322 docker@127.0.0.1)...
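	While waiting for the restarted VM, the useful host-side checks are the qemu pidfile and the hostfwd SSH port from the command line above (a sketch; paths and port copied from the log, kill -0 only tests that the process is alive):
	
		MK=/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/stopped-upgrade-042000
		kill -0 "$(cat "$MK/qemu.pid")" && echo "qemu is running"
		ssh -p 51322 -i "$MK/id_rsa" -o StrictHostKeyChecking=no docker@127.0.0.1 true && echo "ssh is up"
	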
	I0327 11:11:27.924843    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:11:32.927496    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:11:32.927662    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:11:32.940032    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:11:32.940101    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:11:32.950715    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:11:32.950784    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:11:32.961769    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:11:32.961831    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:11:32.972307    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:11:32.972375    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:11:32.982554    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:11:32.982617    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:11:32.993278    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:11:32.993336    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:11:33.003241    9149 logs.go:276] 0 containers: []
	W0327 11:11:33.003249    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:11:33.003297    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:11:33.014034    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:11:33.014048    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:11:33.014053    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:11:33.051927    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:11:33.051942    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:11:33.069662    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:11:33.069672    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:11:33.082157    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:11:33.082169    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:11:33.094287    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:11:33.094297    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:11:33.098856    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:11:33.098863    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:11:33.133689    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:11:33.133700    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:11:33.153914    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:11:33.153923    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:11:33.165522    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:11:33.165532    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:11:33.177328    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:11:33.177340    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:11:33.196645    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:11:33.196653    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:11:33.220064    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:11:33.220074    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:11:33.235917    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:11:33.235927    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:11:33.247538    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:11:33.247552    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:11:33.261798    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:11:33.261811    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:11:33.277034    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:11:33.277048    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:11:35.803427    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:11:40.805642    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:11:40.805735    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:11:40.817128    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:11:40.817194    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:11:40.828674    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:11:40.828742    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:11:40.839442    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:11:40.839503    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:11:40.850754    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:11:40.850810    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:11:40.871388    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:11:40.871447    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:11:40.887397    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:11:40.887455    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:11:40.902803    9149 logs.go:276] 0 containers: []
	W0327 11:11:40.902819    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:11:40.902879    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:11:40.913592    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:11:40.913609    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:11:40.913614    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:11:40.938401    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:11:40.938412    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:11:40.952749    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:11:40.952759    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:11:40.965281    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:11:40.965290    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:11:40.989012    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:11:40.989021    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:11:41.001826    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:11:41.001835    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:11:41.006391    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:11:41.006400    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:11:41.017744    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:11:41.017755    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:11:41.035450    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:11:41.035461    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:11:41.083411    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:11:41.083421    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:11:41.097900    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:11:41.097909    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:11:41.113042    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:11:41.113052    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:11:41.129155    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:11:41.129164    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:11:41.144202    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:11:41.144212    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:11:41.180988    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:11:41.180996    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:11:41.192609    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:11:41.192619    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:11:43.706088    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:11:46.465016    9387 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/config.json ...
	I0327 11:11:46.465581    9387 machine.go:94] provisionDockerMachine start ...
	I0327 11:11:46.465723    9387 main.go:141] libmachine: Using SSH client type: native
	I0327 11:11:46.465998    9387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104de5bf0] 0x104de8450 <nil>  [] 0s} localhost 51322 <nil> <nil>}
	I0327 11:11:46.466008    9387 main.go:141] libmachine: About to run SSH command:
	hostname
	I0327 11:11:46.541662    9387 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0327 11:11:46.541688    9387 buildroot.go:166] provisioning hostname "stopped-upgrade-042000"
	I0327 11:11:46.541756    9387 main.go:141] libmachine: Using SSH client type: native
	I0327 11:11:46.541933    9387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104de5bf0] 0x104de8450 <nil>  [] 0s} localhost 51322 <nil> <nil>}
	I0327 11:11:46.541943    9387 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-042000 && echo "stopped-upgrade-042000" | sudo tee /etc/hostname
	I0327 11:11:46.623936    9387 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-042000
	
	I0327 11:11:46.624010    9387 main.go:141] libmachine: Using SSH client type: native
	I0327 11:11:46.624197    9387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104de5bf0] 0x104de8450 <nil>  [] 0s} localhost 51322 <nil> <nil>}
	I0327 11:11:46.624212    9387 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-042000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-042000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-042000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 11:11:46.694251    9387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 11:11:46.694264    9387 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18517-6013/.minikube CaCertPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18517-6013/.minikube}
	I0327 11:11:46.694277    9387 buildroot.go:174] setting up certificates
	I0327 11:11:46.694282    9387 provision.go:84] configureAuth start
	I0327 11:11:46.694289    9387 provision.go:143] copyHostCerts
	I0327 11:11:46.694366    9387 exec_runner.go:144] found /Users/jenkins/minikube-integration/18517-6013/.minikube/ca.pem, removing ...
	I0327 11:11:46.694373    9387 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18517-6013/.minikube/ca.pem
	I0327 11:11:46.694474    9387 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18517-6013/.minikube/ca.pem (1082 bytes)
	I0327 11:11:46.694663    9387 exec_runner.go:144] found /Users/jenkins/minikube-integration/18517-6013/.minikube/cert.pem, removing ...
	I0327 11:11:46.694667    9387 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18517-6013/.minikube/cert.pem
	I0327 11:11:46.694717    9387 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18517-6013/.minikube/cert.pem (1123 bytes)
	I0327 11:11:46.694831    9387 exec_runner.go:144] found /Users/jenkins/minikube-integration/18517-6013/.minikube/key.pem, removing ...
	I0327 11:11:46.694834    9387 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18517-6013/.minikube/key.pem
	I0327 11:11:46.694888    9387 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18517-6013/.minikube/key.pem (1679 bytes)
	I0327 11:11:46.694991    9387 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-042000 san=[127.0.0.1 localhost minikube stopped-upgrade-042000]
	I0327 11:11:46.817660    9387 provision.go:177] copyRemoteCerts
	I0327 11:11:46.817704    9387 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 11:11:46.817712    9387 sshutil.go:53] new ssh client: &{IP:localhost Port:51322 SSHKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/stopped-upgrade-042000/id_rsa Username:docker}
	I0327 11:11:46.853899    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0327 11:11:46.861028    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0327 11:11:46.867328    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0327 11:11:46.874492    9387 provision.go:87] duration metric: took 180.201916ms to configureAuth
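	The server certificate generated during configureAuth carries the SANs listed above (127.0.0.1, localhost, minikube, stopped-upgrade-042000). One way to confirm that from the host is the following sketch (assumes OpenSSL 1.1.1+ for -ext; macOS's stock LibreSSL may lack it):
	
		openssl x509 -noout -subject -ext subjectAltName \
		  -in /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/server.pem
	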
	I0327 11:11:46.874501    9387 buildroot.go:189] setting minikube options for container-runtime
	I0327 11:11:46.874623    9387 config.go:182] Loaded profile config "stopped-upgrade-042000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 11:11:46.874658    9387 main.go:141] libmachine: Using SSH client type: native
	I0327 11:11:46.874742    9387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104de5bf0] 0x104de8450 <nil>  [] 0s} localhost 51322 <nil> <nil>}
	I0327 11:11:46.874748    9387 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0327 11:11:46.940485    9387 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0327 11:11:46.940496    9387 buildroot.go:70] root file system type: tmpfs
	I0327 11:11:46.940546    9387 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0327 11:11:46.940595    9387 main.go:141] libmachine: Using SSH client type: native
	I0327 11:11:46.940694    9387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104de5bf0] 0x104de8450 <nil>  [] 0s} localhost 51322 <nil> <nil>}
	I0327 11:11:46.940728    9387 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0327 11:11:47.008186    9387 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0327 11:11:47.008241    9387 main.go:141] libmachine: Using SSH client type: native
	I0327 11:11:47.008355    9387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104de5bf0] 0x104de8450 <nil>  [] 0s} localhost 51322 <nil> <nil>}
	I0327 11:11:47.008363    9387 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0327 11:11:47.381051    9387 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0327 11:11:47.381064    9387 machine.go:97] duration metric: took 915.47975ms to provisionDockerMachine
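	As the unit file's own comments note, the empty ExecStart= clears any inherited command so the full ExecStart= that follows is the only one systemd sees. Because this boot image has no pre-existing /lib/systemd/system/docker.service, the diff fails and the new file is moved into place, creating the symlink above. To confirm what systemd actually loaded, from inside the guest (sketch):
	
		sudo systemctl cat docker.service | grep -E '^ExecStart'
		systemctl show docker --property=FragmentPath
	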
	I0327 11:11:47.381071    9387 start.go:293] postStartSetup for "stopped-upgrade-042000" (driver="qemu2")
	I0327 11:11:47.381077    9387 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 11:11:47.381140    9387 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 11:11:47.381159    9387 sshutil.go:53] new ssh client: &{IP:localhost Port:51322 SSHKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/stopped-upgrade-042000/id_rsa Username:docker}
	I0327 11:11:47.414666    9387 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 11:11:47.415955    9387 info.go:137] Remote host: Buildroot 2021.02.12
	I0327 11:11:47.415962    9387 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18517-6013/.minikube/addons for local assets ...
	I0327 11:11:47.416039    9387 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18517-6013/.minikube/files for local assets ...
	I0327 11:11:47.416147    9387 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18517-6013/.minikube/files/etc/ssl/certs/64762.pem -> 64762.pem in /etc/ssl/certs
	I0327 11:11:47.416275    9387 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0327 11:11:47.419013    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/files/etc/ssl/certs/64762.pem --> /etc/ssl/certs/64762.pem (1708 bytes)
	I0327 11:11:47.426123    9387 start.go:296] duration metric: took 45.04775ms for postStartSetup
	I0327 11:11:47.426138    9387 fix.go:56] duration metric: took 21.026892958s for fixHost
	I0327 11:11:47.426172    9387 main.go:141] libmachine: Using SSH client type: native
	I0327 11:11:47.426307    9387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104de5bf0] 0x104de8450 <nil>  [] 0s} localhost 51322 <nil> <nil>}
	I0327 11:11:47.426314    9387 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0327 11:11:47.493073    9387 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711563107.933312629
	
	I0327 11:11:47.493083    9387 fix.go:216] guest clock: 1711563107.933312629
	I0327 11:11:47.493087    9387 fix.go:229] Guest: 2024-03-27 11:11:47.933312629 -0700 PDT Remote: 2024-03-27 11:11:47.42614 -0700 PDT m=+21.145543292 (delta=507.172629ms)
	I0327 11:11:47.493103    9387 fix.go:200] guest clock delta is within tolerance: 507.172629ms
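	The fix.go lines compute guest-minus-host clock skew; the %!s(MISSING)/%!N(MISSING) artifacts appear to be Go's fmt package flagging format verbs with no arguments when the command is echoed into the log, and the command actually run in the guest is date +%s.%N. A manual equivalent (sketch; whole seconds, since BSD date on the mac host lacks %N):
	
		MK=/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/stopped-upgrade-042000
		guest=$(ssh -p 51322 -i "$MK/id_rsa" docker@127.0.0.1 date +%s)
		host=$(date +%s)
		echo "skew: $((guest - host)) s"
	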
	I0327 11:11:47.493105    9387 start.go:83] releasing machines lock for "stopped-upgrade-042000", held for 21.093868584s
	I0327 11:11:47.493171    9387 ssh_runner.go:195] Run: cat /version.json
	I0327 11:11:47.493183    9387 sshutil.go:53] new ssh client: &{IP:localhost Port:51322 SSHKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/stopped-upgrade-042000/id_rsa Username:docker}
	I0327 11:11:47.493172    9387 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 11:11:47.493204    9387 sshutil.go:53] new ssh client: &{IP:localhost Port:51322 SSHKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/stopped-upgrade-042000/id_rsa Username:docker}
	W0327 11:11:47.493811    9387 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51322: connect: connection refused
	I0327 11:11:47.493832    9387 retry.go:31] will retry after 208.46166ms: dial tcp [::1]:51322: connect: connection refused
	W0327 11:11:47.528193    9387 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0327 11:11:47.528248    9387 ssh_runner.go:195] Run: systemctl --version
	I0327 11:11:47.530064    9387 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0327 11:11:47.531690    9387 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0327 11:11:47.531715    9387 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0327 11:11:47.534749    9387 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0327 11:11:47.539638    9387 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0327 11:11:47.539646    9387 start.go:494] detecting cgroup driver to use...
	I0327 11:11:47.539714    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 11:11:47.546871    9387 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0327 11:11:47.550678    9387 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0327 11:11:47.554040    9387 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0327 11:11:47.554069    9387 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0327 11:11:47.557043    9387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 11:11:47.559889    9387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0327 11:11:47.563032    9387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 11:11:47.566419    9387 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 11:11:47.569448    9387 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0327 11:11:47.572327    9387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0327 11:11:47.575271    9387 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0327 11:11:47.578739    9387 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 11:11:47.581792    9387 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 11:11:47.584414    9387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 11:11:47.669867    9387 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0327 11:11:47.676788    9387 start.go:494] detecting cgroup driver to use...
	I0327 11:11:47.676865    9387 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0327 11:11:47.682356    9387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 11:11:47.687230    9387 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0327 11:11:47.693100    9387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 11:11:47.697625    9387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 11:11:47.701971    9387 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0327 11:11:47.770656    9387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 11:11:47.787040    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 11:11:47.792340    9387 ssh_runner.go:195] Run: which cri-dockerd
	I0327 11:11:47.793532    9387 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0327 11:11:47.795899    9387 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0327 11:11:47.800728    9387 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0327 11:11:47.876941    9387 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0327 11:11:47.953308    9387 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0327 11:11:47.953393    9387 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0327 11:11:47.958784    9387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 11:11:48.034716    9387 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0327 11:11:48.149611    9387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0327 11:11:48.154198    9387 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0327 11:11:48.159333    9387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0327 11:11:48.164201    9387 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0327 11:11:48.246111    9387 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0327 11:11:48.320631    9387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 11:11:48.402596    9387 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0327 11:11:48.408698    9387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0327 11:11:48.412857    9387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 11:11:48.494205    9387 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0327 11:11:48.537031    9387 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0327 11:11:48.537106    9387 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0327 11:11:48.539061    9387 start.go:562] Will wait 60s for crictl version
	I0327 11:11:48.539117    9387 ssh_runner.go:195] Run: which crictl
	I0327 11:11:48.540483    9387 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 11:11:48.555510    9387 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0327 11:11:48.555578    9387 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0327 11:11:48.576880    9387 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0327 11:11:48.596971    9387 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0327 11:11:48.597099    9387 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0327 11:11:48.598508    9387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 11:11:48.602448    9387 kubeadm.go:877] updating cluster {Name:stopped-upgrade-042000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51357 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-042000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0327 11:11:48.602499    9387 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0327 11:11:48.602537    9387 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0327 11:11:48.617371    9387 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0327 11:11:48.617379    9387 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
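	The images baked into this older boot image still carry their pre-rename k8s.gcr.io tags, so the check for registry.k8s.io/kube-apiserver:v1.24.1 misses and minikube falls back to copying the preload tarball over (next lines). Purely as an illustration of the tag mismatch, and not what minikube does here, re-tagging the four versioned components would satisfy the lookup:
	
		# Hypothetical: alias the old-registry tags under the new registry name.
		for img in kube-apiserver kube-proxy kube-controller-manager kube-scheduler; do
		  docker tag "k8s.gcr.io/$img:v1.24.1" "registry.k8s.io/$img:v1.24.1"
		done
	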
	I0327 11:11:48.617425    9387 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0327 11:11:48.620427    9387 ssh_runner.go:195] Run: which lz4
	I0327 11:11:48.621618    9387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0327 11:11:48.622863    9387 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0327 11:11:48.622872    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0327 11:11:49.385638    9387 docker.go:649] duration metric: took 764.056875ms to copy over tarball
	I0327 11:11:49.385708    9387 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0327 11:11:50.571320    9387 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.185606583s)
	I0327 11:11:50.571335    9387 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0327 11:11:50.587305    9387 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0327 11:11:50.590972    9387 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0327 11:11:50.595995    9387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 11:11:50.675476    9387 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0327 11:11:48.706893    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0327 11:11:48.706973    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:11:48.719351    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:11:48.719420    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:11:48.731340    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:11:48.731414    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:11:48.746303    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:11:48.746371    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:11:48.758339    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:11:48.758419    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:11:48.770169    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:11:48.770242    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:11:48.782572    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:11:48.782640    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:11:48.793919    9149 logs.go:276] 0 containers: []
	W0327 11:11:48.793930    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:11:48.793987    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:11:48.806115    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:11:48.806132    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:11:48.806138    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:11:48.832373    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:11:48.832387    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:11:48.846220    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:11:48.846234    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:11:48.866377    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:11:48.866389    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:11:48.903321    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:11:48.903338    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:11:48.918592    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:11:48.918604    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:11:48.933953    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:11:48.933969    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:11:48.946989    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:11:48.947001    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:11:48.960699    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:11:48.960712    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:11:48.973972    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:11:48.973985    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:11:48.998520    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:11:48.998533    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:11:49.003520    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:11:49.003533    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:11:49.018180    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:11:49.018192    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:11:49.036953    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:11:49.036967    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:11:49.053961    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:11:49.053972    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:11:49.092293    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:11:49.092305    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:11:51.607899    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:11:52.276635    9387 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.601153625s)
	I0327 11:11:52.276740    9387 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0327 11:11:52.289459    9387 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0327 11:11:52.289470    9387 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0327 11:11:52.289475    9387 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0327 11:11:52.296326    9387 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 11:11:52.296431    9387 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 11:11:52.296440    9387 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0327 11:11:52.296500    9387 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 11:11:52.296511    9387 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 11:11:52.296555    9387 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0327 11:11:52.296555    9387 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 11:11:52.296604    9387 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 11:11:52.303677    9387 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 11:11:52.305403    9387 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 11:11:52.305417    9387 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0327 11:11:52.305307    9387 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 11:11:52.305486    9387 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 11:11:52.305509    9387 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 11:11:52.305545    9387 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0327 11:11:52.305602    9387 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	W0327 11:11:54.280866    9387 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0327 11:11:54.281295    9387 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0327 11:11:54.308832    9387 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0327 11:11:54.308878    9387 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 11:11:54.308967    9387 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0327 11:11:54.327259    9387 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0327 11:11:54.327419    9387 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0327 11:11:54.330766    9387 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0327 11:11:54.330789    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0327 11:11:54.360577    9387 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0327 11:11:54.366710    9387 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0327 11:11:54.366722    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0327 11:11:54.373732    9387 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0327 11:11:54.373754    9387 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0327 11:11:54.373807    9387 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0327 11:11:54.409078    9387 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0327 11:11:54.426360    9387 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0327 11:11:54.430932    9387 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0327 11:11:54.430983    9387 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0327 11:11:54.431004    9387 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0327 11:11:54.431032    9387 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 11:11:54.431073    9387 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0327 11:11:54.440694    9387 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0327 11:11:54.440716    9387 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0327 11:11:54.440776    9387 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0327 11:11:54.441853    9387 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0327 11:11:54.442726    9387 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0327 11:11:54.446641    9387 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 11:11:54.450754    9387 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0327 11:11:54.455517    9387 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0327 11:11:54.455630    9387 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0327 11:11:54.456147    9387 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0327 11:11:54.456163    9387 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 11:11:54.456196    9387 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0327 11:11:54.459668    9387 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0327 11:11:54.459686    9387 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 11:11:54.459734    9387 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 11:11:54.473024    9387 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0327 11:11:54.473037    9387 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0327 11:11:54.473062    9387 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 11:11:54.473060    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0327 11:11:54.473111    9387 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0327 11:11:54.476569    9387 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0327 11:11:54.476619    9387 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0327 11:11:54.482552    9387 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0327 11:11:54.482565    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0327 11:11:54.486256    9387 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0327 11:11:54.513404    9387 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0327 11:11:54.852002    9387 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0327 11:11:54.852506    9387 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 11:11:54.890869    9387 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0327 11:11:54.890912    9387 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 11:11:54.891024    9387 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 11:11:54.916524    9387 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0327 11:11:54.916680    9387 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0327 11:11:54.918723    9387 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0327 11:11:54.918739    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0327 11:11:54.950911    9387 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0327 11:11:54.950924    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0327 11:11:55.186108    9387 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0327 11:11:55.186145    9387 cache_images.go:92] duration metric: took 2.896680917s to LoadCachedImages
	W0327 11:11:55.186186    9387 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
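
The cache-load sequence above (existence check, scp of the image tarball, `docker load`, verify) can be replayed by hand. A minimal sketch run inside the guest VM, assuming the tarball has already been copied to the path minikube uses (the log does that step via scp); all commands below are taken from the log itself:

    #!/bin/bash
    # By-hand replay of the cached-image load for coredns, as seen above.
    IMG=/var/lib/minikube/images/coredns_v1.8.6

    # The existence check minikube issues before deciding to transfer.
    stat -c "%s %y" "$IMG"

    # Stream the tarball into the docker daemon, exactly as logged.
    sudo cat "$IMG" | docker load

    # Confirm the tag now resolves in the container runtime.
    docker image inspect --format '{{.Id}}' registry.k8s.io/coredns/coredns:v1.8.6
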
	I0327 11:11:55.186192    9387 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0327 11:11:55.186238    9387 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-042000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-042000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0327 11:11:55.186314    9387 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0327 11:11:55.199182    9387 cni.go:84] Creating CNI manager for ""
	I0327 11:11:55.199194    9387 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:11:55.199205    9387 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0327 11:11:55.199214    9387 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-042000 NodeName:stopped-upgrade-042000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0327 11:11:55.199284    9387 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-042000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0327 11:11:55.199343    9387 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0327 11:11:55.202608    9387 binaries.go:44] Found k8s binaries, skipping transfer
	I0327 11:11:55.202639    9387 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0327 11:11:55.205805    9387 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0327 11:11:55.210658    9387 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 11:11:55.215264    9387 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0327 11:11:55.220378    9387 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0327 11:11:55.221486    9387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
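
The single-line /etc/hosts edit just above is easier to read unrolled. The same operation as a short sketch (same host entry and temp-file pattern as the logged command, no new behaviour):

    #!/bin/bash
    # Strip any stale control-plane.minikube.internal entry, append the
    # fresh one, then copy the result back under sudo (the rewrite of
    # /etc/hosts itself needs root).
    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '10.0.2.15\tcontrol-plane.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
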
	I0327 11:11:55.224819    9387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 11:11:55.306748    9387 ssh_runner.go:195] Run: sudo systemctl start kubelet
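
The lines above show the kubelet drop-in being installed and activated: the rendered [Unit]/[Service] stanza lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, followed by a daemon-reload and a start. A sketch of the same steps, with the drop-in abbreviated to the ExecStart override shown earlier in the log:

    #!/bin/bash
    # Install and activate the kubelet drop-in the log writes via scp.
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-042000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
    EOF
    sudo systemctl daemon-reload
    sudo systemctl start kubelet
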
	I0327 11:11:55.312808    9387 certs.go:68] Setting up /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000 for IP: 10.0.2.15
	I0327 11:11:55.312817    9387 certs.go:194] generating shared ca certs ...
	I0327 11:11:55.312825    9387 certs.go:226] acquiring lock for ca certs: {Name:mk7b549dba732db56799e548b1d7e96b357996ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:11:55.312991    9387 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/ca.key
	I0327 11:11:55.313731    9387 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/proxy-client-ca.key
	I0327 11:11:55.313737    9387 certs.go:256] generating profile certs ...
	I0327 11:11:55.313979    9387 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/client.key
	I0327 11:11:55.313997    9387 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.key.bba745c7
	I0327 11:11:55.314008    9387 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.crt.bba745c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0327 11:11:55.399846    9387 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.crt.bba745c7 ...
	I0327 11:11:55.399862    9387 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.crt.bba745c7: {Name:mk4d87191c8a2c07bfc811c69b28645fdd58add9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:11:55.400171    9387 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.key.bba745c7 ...
	I0327 11:11:55.400179    9387 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.key.bba745c7: {Name:mk6de3173a0fb4042d5e3d484a5b43b481f6734e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:11:55.400321    9387 certs.go:381] copying /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.crt.bba745c7 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.crt
	I0327 11:11:55.400465    9387 certs.go:385] copying /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.key.bba745c7 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.key
	I0327 11:11:55.400744    9387 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/proxy-client.key
	I0327 11:11:55.400921    9387 certs.go:484] found cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/6476.pem (1338 bytes)
	W0327 11:11:55.401128    9387 certs.go:480] ignoring /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/6476_empty.pem, impossibly tiny 0 bytes
	I0327 11:11:55.401134    9387 certs.go:484] found cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca-key.pem (1675 bytes)
	I0327 11:11:55.401158    9387 certs.go:484] found cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem (1082 bytes)
	I0327 11:11:55.401180    9387 certs.go:484] found cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem (1123 bytes)
	I0327 11:11:55.401198    9387 certs.go:484] found cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/key.pem (1679 bytes)
	I0327 11:11:55.401241    9387 certs.go:484] found cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/files/etc/ssl/certs/64762.pem (1708 bytes)
	I0327 11:11:55.401571    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 11:11:55.408042    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0327 11:11:55.414850    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 11:11:55.422500    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0327 11:11:55.429393    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0327 11:11:55.436223    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0327 11:11:55.443139    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 11:11:55.450536    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0327 11:11:55.457174    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 11:11:55.463322    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/6476.pem --> /usr/share/ca-certificates/6476.pem (1338 bytes)
	I0327 11:11:55.470336    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/files/etc/ssl/certs/64762.pem --> /usr/share/ca-certificates/64762.pem (1708 bytes)
	I0327 11:11:55.477089    9387 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0327 11:11:55.481923    9387 ssh_runner.go:195] Run: openssl version
	I0327 11:11:55.483757    9387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 11:11:55.487082    9387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 11:11:55.488438    9387 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 18:08 /usr/share/ca-certificates/minikubeCA.pem
	I0327 11:11:55.488460    9387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 11:11:55.490078    9387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0327 11:11:55.492879    9387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6476.pem && ln -fs /usr/share/ca-certificates/6476.pem /etc/ssl/certs/6476.pem"
	I0327 11:11:55.495636    9387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6476.pem
	I0327 11:11:55.497085    9387 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 17:55 /usr/share/ca-certificates/6476.pem
	I0327 11:11:55.497103    9387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6476.pem
	I0327 11:11:55.498790    9387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6476.pem /etc/ssl/certs/51391683.0"
	I0327 11:11:55.502070    9387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64762.pem && ln -fs /usr/share/ca-certificates/64762.pem /etc/ssl/certs/64762.pem"
	I0327 11:11:55.504800    9387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/64762.pem
	I0327 11:11:55.506035    9387 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 17:55 /usr/share/ca-certificates/64762.pem
	I0327 11:11:55.506051    9387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64762.pem
	I0327 11:11:55.507766    9387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64762.pem /etc/ssl/certs/3ec20f2e.0"
	I0327 11:11:55.511120    9387 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 11:11:55.512587    9387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0327 11:11:55.514760    9387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0327 11:11:55.517081    9387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0327 11:11:55.519031    9387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0327 11:11:55.520781    9387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0327 11:11:55.522513    9387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
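
The certificate setup above relies on OpenSSL's subject-hash naming: each CA PEM is symlinked as /etc/ssl/certs/<hash>.0, where <hash> is what `openssl x509 -hash` prints, which is how the log arrives at b5213941.0, 51391683.0 and 3ec20f2e.0. A sketch of that step plus the 24-hour expiry probe the log runs on each control-plane cert (file names follow the log; the wiring around them is illustrative):

    #!/bin/bash
    # Link a CA cert under its OpenSSL subject hash, as done above for
    # minikubeCA.pem, 6476.pem and 64762.pem.
    PEM=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$PEM")
    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"

    # Expiry probe: -checkend 86400 exits non-zero if the cert expires
    # within the next 24 hours.
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"
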
	I0327 11:11:55.524347    9387 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-042000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51357 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-042000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 11:11:55.524411    9387 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0327 11:11:55.534606    9387 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0327 11:11:55.538023    9387 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0327 11:11:55.538030    9387 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0327 11:11:55.538034    9387 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0327 11:11:55.538062    9387 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0327 11:11:55.540783    9387 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0327 11:11:55.541064    9387 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-042000" does not appear in /Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:11:55.541157    9387 kubeconfig.go:62] /Users/jenkins/minikube-integration/18517-6013/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-042000" cluster setting kubeconfig missing "stopped-upgrade-042000" context setting]
	I0327 11:11:55.541330    9387 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/kubeconfig: {Name:mkbb2a293ed6f55faa985236f49efd479958575e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:11:55.541773    9387 kapi.go:59] client config for stopped-upgrade-042000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/client.key", CAFile:"/Users/jenkins/minikube-integration/18517-6013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1060d6b90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 11:11:55.542205    9387 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0327 11:11:55.544858    9387 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-042000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
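
Drift detection here is just a diff of the config on disk against the freshly rendered one; the two hunks show the cri-dockerd socket gaining its unix:// scheme and the kubelet cgroup driver moving from systemd to cgroupfs (plus the two new kubelet options). The check can be reproduced directly with the same paths the log uses:

    #!/bin/bash
    # Reproduce the kubeadm config drift check from the log.
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo "no drift" \
      || echo "drift detected; minikube reconfigures from kubeadm.yaml.new"
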
	I0327 11:11:55.544865    9387 kubeadm.go:1154] stopping kube-system containers ...
	I0327 11:11:55.544900    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0327 11:11:55.556022    9387 docker.go:483] Stopping containers: [0dc5ad39c5a3 2358dbbd7710 cef10ae8ee0a 564c237001b0 7d1480bb2f78 966c2fcaeaec 2ce9bf2e1447 5223b5e56540]
	I0327 11:11:55.556090    9387 ssh_runner.go:195] Run: docker stop 0dc5ad39c5a3 2358dbbd7710 cef10ae8ee0a 564c237001b0 7d1480bb2f78 966c2fcaeaec 2ce9bf2e1447 5223b5e56540
	I0327 11:11:55.567193    9387 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0327 11:11:55.572883    9387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 11:11:55.575516    9387 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 11:11:55.575521    9387 kubeadm.go:156] found existing configuration files:
	
	I0327 11:11:55.575545    9387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/admin.conf
	I0327 11:11:55.578463    9387 kubeadm.go:162] "https://control-plane.minikube.internal:51357" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 11:11:55.578482    9387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 11:11:55.581247    9387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/kubelet.conf
	I0327 11:11:55.583532    9387 kubeadm.go:162] "https://control-plane.minikube.internal:51357" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 11:11:55.583549    9387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 11:11:55.586525    9387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/controller-manager.conf
	I0327 11:11:55.589126    9387 kubeadm.go:162] "https://control-plane.minikube.internal:51357" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 11:11:55.589147    9387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 11:11:55.591475    9387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/scheduler.conf
	I0327 11:11:55.594445    9387 kubeadm.go:162] "https://control-plane.minikube.internal:51357" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 11:11:55.594466    9387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0327 11:11:55.597152    9387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 11:11:55.599763    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 11:11:55.621813    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 11:11:56.066174    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0327 11:11:56.195944    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 11:11:56.216679    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0327 11:11:56.239995    9387 api_server.go:52] waiting for apiserver process to appear ...
	I0327 11:11:56.240068    9387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
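
The restart path above runs the kubeadm init phases individually rather than a full `kubeadm init`: certs, kubeconfig, kubelet-start, control-plane, and etcd, each against the same rendered config. A sketch of that sequence as the log executes it (binary and config paths taken from the log; the loop is just packaging):

    #!/bin/bash
    # The individual kubeadm init phases minikube runs on a cluster
    # restart, in the order the log executes them.
    BIN=/var/lib/minikube/binaries/v1.24.1
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      # $phase is intentionally unquoted so "certs all" expands to two arguments.
      sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"
    done

    # The log then waits for the apiserver process before probing healthz.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
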
	I0327 11:11:56.610656    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:11:56.610933    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:11:56.634677    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:11:56.634792    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:11:56.651287    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:11:56.651375    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:11:56.664272    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:11:56.664347    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:11:56.678839    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:11:56.678911    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:11:56.689495    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:11:56.689571    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:11:56.699430    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:11:56.699522    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:11:56.710809    9149 logs.go:276] 0 containers: []
	W0327 11:11:56.710819    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:11:56.710872    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:11:56.723300    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:11:56.723319    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:11:56.723326    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:11:56.739537    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:11:56.739549    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:11:56.753159    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:11:56.753173    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:11:56.766485    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:11:56.766499    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:11:56.779403    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:11:56.779430    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:11:56.817911    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:11:56.817934    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:11:56.849853    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:11:56.849888    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:11:56.868930    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:11:56.868955    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:11:56.883551    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:11:56.883562    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:11:56.897989    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:11:56.898000    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:11:56.910153    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:11:56.910166    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:11:56.930326    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:11:56.930338    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:11:56.942072    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:11:56.942085    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:11:56.959338    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:11:56.959349    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:11:56.985003    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:11:56.985020    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:11:56.989914    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:11:56.989925    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:11:56.742107    9387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 11:11:57.242192    9387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 11:11:57.250063    9387 api_server.go:72] duration metric: took 1.010075375s to wait for apiserver process to appear ...
	I0327 11:11:57.250075    9387 api_server.go:88] waiting for apiserver healthz status ...
	I0327 11:11:57.250090    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
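
From here both processes (pids 9149 and 9387) settle into the same loop: probe /healthz, hit the client timeout ("context deadline exceeded"), gather logs, retry. The probe itself is an HTTPS GET; a hedged manual equivalent, assuming curl is available in the guest (-k skips verification, since the server cert is signed by the cluster CA rather than a system CA):

    #!/bin/bash
    # Manual equivalent of the repeated healthz probe in the log.
    # --max-time mirrors the short client timeout visible in the errors.
    curl -k --max-time 5 https://10.0.2.15:8443/healthz
    echo  # a healthy apiserver returns "ok" with no trailing newline
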
	I0327 11:11:59.532861    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:02.252181    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:02.252223    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:04.535065    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:04.535179    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:12:04.546749    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:12:04.546813    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:12:04.557681    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:12:04.557745    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:12:04.568380    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:12:04.568446    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:12:04.578900    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:12:04.578964    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:12:04.589863    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:12:04.589923    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:12:04.600378    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:12:04.600450    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:12:04.610680    9149 logs.go:276] 0 containers: []
	W0327 11:12:04.610690    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:12:04.610742    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:12:04.621105    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:12:04.621122    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:12:04.621127    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:12:04.655902    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:12:04.655916    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:12:04.683685    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:12:04.683698    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:12:04.698604    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:12:04.698618    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:12:04.713405    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:12:04.713415    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:12:04.724890    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:12:04.724901    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:12:04.737510    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:12:04.737523    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:12:04.773395    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:12:04.773403    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:12:04.777451    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:12:04.777457    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:12:04.794725    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:12:04.794735    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:12:04.806855    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:12:04.806866    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:12:04.818201    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:12:04.818212    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:12:04.833680    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:12:04.833690    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:12:04.845981    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:12:04.845990    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:12:04.862349    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:12:04.862359    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:12:04.886869    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:12:04.886879    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:12:07.405372    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:07.252458    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:07.252506    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:12.407551    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:12.407651    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:12:12.420484    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:12:12.420558    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:12:12.432521    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:12:12.432598    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:12:12.444000    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:12:12.444069    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:12:12.455598    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:12:12.455675    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:12:12.465770    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:12:12.465837    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:12:12.476458    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:12:12.476528    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:12:12.486430    9149 logs.go:276] 0 containers: []
	W0327 11:12:12.486439    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:12:12.486489    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:12:12.501047    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:12:12.501063    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:12:12.501068    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:12:12.516810    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:12:12.516824    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:12:12.529144    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:12:12.529154    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:12:12.542889    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:12:12.542898    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:12:12.567277    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:12:12.567288    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:12:12.580934    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:12:12.580947    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:12:12.595729    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:12:12.595740    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:12:12.612715    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:12:12.612726    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:12:12.635235    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:12:12.635242    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:12:12.671064    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:12:12.671077    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:12:12.675624    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:12:12.675633    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:12:12.687069    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:12:12.687080    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:12:12.698621    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:12:12.698634    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:12:12.734550    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:12:12.734565    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:12:12.749089    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:12:12.749100    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:12:12.760314    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:12:12.760325    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
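
The cycle above is minikube's log gatherer: enumerate each control-plane component's containers with a Docker name filter, then tail the last 400 lines of each one. A minimal standalone sketch of the same pattern against a local Docker daemon (the helper names here are hypothetical, not minikube's):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists the IDs of containers whose names match the
    // k8s_<component> prefix, mirroring the
    // `docker ps -a --filter=name=k8s_... --format={{.ID}}` calls above.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    // tailLogs fetches a container's last 400 log lines, like the
    // `docker logs --tail 400 <id>` commands in the transcript.
    func tailLogs(id string) (string, error) {
    	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println("list failed:", err)
    			continue
    		}
    		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
    		for _, id := range ids {
    			logs, _ := tailLogs(id)
    			fmt.Println(logs)
    		}
    	}
    }

The "container status" step uses the same idea with a fallback baked into the shell: prefer crictl when it is on the PATH, otherwise fall back to `docker ps -a`.
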
	I0327 11:12:12.252846    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:12.252904    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:15.273749    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:17.253523    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:17.253569    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:20.274118    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:20.274446    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:12:20.295417    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:12:20.295518    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:12:20.311061    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:12:20.311135    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:12:20.323157    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:12:20.323224    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:12:20.333757    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:12:20.333822    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:12:20.348859    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:12:20.348925    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:12:20.359873    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:12:20.359934    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:12:20.370417    9149 logs.go:276] 0 containers: []
	W0327 11:12:20.370427    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:12:20.370478    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:12:20.380883    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:12:20.380900    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:12:20.380905    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:12:20.385665    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:12:20.385671    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:12:20.397174    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:12:20.397187    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:12:20.408525    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:12:20.408534    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:12:20.422232    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:12:20.422246    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:12:20.436922    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:12:20.436937    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:12:20.449046    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:12:20.449059    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:12:20.466477    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:12:20.466487    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:12:20.490355    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:12:20.490366    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:12:20.517202    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:12:20.517217    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:12:20.528379    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:12:20.528389    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:12:20.539418    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:12:20.539428    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:12:20.551444    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:12:20.551458    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:12:20.585836    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:12:20.585845    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:12:20.621087    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:12:20.621101    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:12:20.634751    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:12:20.634764    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:12:22.254153    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:22.254186    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:23.152501    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:27.254999    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:27.255046    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:28.154418    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:28.154601    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:12:28.172614    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:12:28.172700    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:12:28.185516    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:12:28.185593    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:12:28.196564    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:12:28.196631    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:12:28.207563    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:12:28.207635    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:12:28.218070    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:12:28.218131    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:12:28.228527    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:12:28.228592    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:12:28.238827    9149 logs.go:276] 0 containers: []
	W0327 11:12:28.238841    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:12:28.238897    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:12:28.249553    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:12:28.249571    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:12:28.249577    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:12:28.254029    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:12:28.254037    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:12:28.267135    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:12:28.267146    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:12:28.279321    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:12:28.279335    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:12:28.294021    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:12:28.294031    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:12:28.315159    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:12:28.315170    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:12:28.330227    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:12:28.330239    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:12:28.367431    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:12:28.367439    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:12:28.381942    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:12:28.381953    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:12:28.405759    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:12:28.405769    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:12:28.419588    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:12:28.419599    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:12:28.431579    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:12:28.431590    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:12:28.443893    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:12:28.443903    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:12:28.479472    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:12:28.479482    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:12:28.501713    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:12:28.501725    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:12:28.515155    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:12:28.515165    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:12:31.029396    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:32.256117    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:32.256167    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:36.031773    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:36.031979    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:12:36.051226    9149 logs.go:276] 2 containers: [1cf791fdb57f 80d88951e124]
	I0327 11:12:36.051324    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:12:36.065197    9149 logs.go:276] 2 containers: [950f1b63ad60 170368f0be1d]
	I0327 11:12:36.065263    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:12:36.090300    9149 logs.go:276] 1 containers: [67b499cfbe2c]
	I0327 11:12:36.090371    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:12:36.101181    9149 logs.go:276] 2 containers: [882df88a1725 11524bd3206a]
	I0327 11:12:36.101248    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:12:36.111539    9149 logs.go:276] 1 containers: [6228842eb636]
	I0327 11:12:36.111611    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:12:36.121771    9149 logs.go:276] 2 containers: [9c6807a2f112 7e60cccbe5e9]
	I0327 11:12:36.121843    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:12:36.132020    9149 logs.go:276] 0 containers: []
	W0327 11:12:36.132033    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:12:36.132087    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:12:36.142356    9149 logs.go:276] 1 containers: [09e58127daa7]
	I0327 11:12:36.142372    9149 logs.go:123] Gathering logs for kube-proxy [6228842eb636] ...
	I0327 11:12:36.142377    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6228842eb636"
	I0327 11:12:36.156528    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:12:36.156540    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:12:36.168197    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:12:36.168209    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:12:36.172671    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:12:36.172680    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:12:36.207327    9149 logs.go:123] Gathering logs for coredns [67b499cfbe2c] ...
	I0327 11:12:36.207341    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b499cfbe2c"
	I0327 11:12:36.218906    9149 logs.go:123] Gathering logs for kube-scheduler [11524bd3206a] ...
	I0327 11:12:36.218918    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11524bd3206a"
	I0327 11:12:36.234787    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:12:36.234799    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:12:36.270602    9149 logs.go:123] Gathering logs for etcd [170368f0be1d] ...
	I0327 11:12:36.270611    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 170368f0be1d"
	I0327 11:12:36.289158    9149 logs.go:123] Gathering logs for storage-provisioner [09e58127daa7] ...
	I0327 11:12:36.289170    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e58127daa7"
	I0327 11:12:36.300559    9149 logs.go:123] Gathering logs for kube-scheduler [882df88a1725] ...
	I0327 11:12:36.300571    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882df88a1725"
	I0327 11:12:36.312791    9149 logs.go:123] Gathering logs for kube-controller-manager [7e60cccbe5e9] ...
	I0327 11:12:36.312802    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e60cccbe5e9"
	I0327 11:12:36.324767    9149 logs.go:123] Gathering logs for etcd [950f1b63ad60] ...
	I0327 11:12:36.324778    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950f1b63ad60"
	I0327 11:12:36.339659    9149 logs.go:123] Gathering logs for kube-controller-manager [9c6807a2f112] ...
	I0327 11:12:36.339670    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6807a2f112"
	I0327 11:12:36.357420    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:12:36.357430    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:12:36.380358    9149 logs.go:123] Gathering logs for kube-apiserver [1cf791fdb57f] ...
	I0327 11:12:36.380365    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf791fdb57f"
	I0327 11:12:36.394527    9149 logs.go:123] Gathering logs for kube-apiserver [80d88951e124] ...
	I0327 11:12:36.394540    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d88951e124"
	I0327 11:12:37.257630    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:37.257675    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:38.919598    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:43.922272    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:43.922356    9149 kubeadm.go:591] duration metric: took 4m3.963085541s to restartPrimaryControlPlane
	W0327 11:12:43.922426    9149 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0327 11:12:43.922456    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0327 11:12:44.965867    9149 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.0434055s)
	I0327 11:12:44.965923    9149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 11:12:44.970738    9149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 11:12:44.973513    9149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 11:12:44.976101    9149 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 11:12:44.976107    9149 kubeadm.go:156] found existing configuration files:
	
	I0327 11:12:44.976132    9149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/admin.conf
	I0327 11:12:44.978778    9149 kubeadm.go:162] "https://control-plane.minikube.internal:51147" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 11:12:44.978803    9149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 11:12:44.981172    9149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/kubelet.conf
	I0327 11:12:44.983892    9149 kubeadm.go:162] "https://control-plane.minikube.internal:51147" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 11:12:44.983910    9149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 11:12:44.986960    9149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/controller-manager.conf
	I0327 11:12:44.989588    9149 kubeadm.go:162] "https://control-plane.minikube.internal:51147" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 11:12:44.989609    9149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 11:12:44.992464    9149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/scheduler.conf
	I0327 11:12:44.996085    9149 kubeadm.go:162] "https://control-plane.minikube.internal:51147" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51147 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 11:12:44.996131    9149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
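
Each config check above is a grep for the expected control-plane endpoint followed by a rm when the grep fails; since `kubeadm reset` already deleted the files, every grep exits with status 2 and every kubeconfig is queued for regeneration. A sketch of that decision on local files, assuming direct filesystem access instead of minikube's SSH runner (removeIfStale is a hypothetical name):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // removeIfStale deletes conf unless it already references endpoint,
    // mirroring the grep-then-rm sequence in the log. A missing file and
    // a file pointing at the wrong endpoint are treated the same way:
    // remove it so kubeadm writes a fresh one.
    func removeIfStale(conf, endpoint string) error {
    	data, err := os.ReadFile(conf)
    	if err == nil && bytes.Contains(data, []byte(endpoint)) {
    		return nil // up to date, keep it
    	}
    	if err := os.Remove(conf); err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	fmt.Println("will regenerate", conf)
    	return nil
    }

    func main() {
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if err := removeIfStale(f, "https://control-plane.minikube.internal:51147"); err != nil {
    			fmt.Println(err)
    		}
    	}
    }
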
	I0327 11:12:44.999704    9149 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0327 11:12:45.018103    9149 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0327 11:12:45.018214    9149 kubeadm.go:309] [preflight] Running pre-flight checks
	I0327 11:12:45.065986    9149 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0327 11:12:45.066044    9149 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0327 11:12:45.066100    9149 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0327 11:12:45.117155    9149 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0327 11:12:45.121543    9149 out.go:204]   - Generating certificates and keys ...
	I0327 11:12:45.121578    9149 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0327 11:12:45.121610    9149 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0327 11:12:45.121645    9149 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0327 11:12:45.121722    9149 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0327 11:12:45.121761    9149 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0327 11:12:45.121788    9149 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0327 11:12:45.121871    9149 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0327 11:12:45.121954    9149 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0327 11:12:45.122006    9149 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0327 11:12:45.122089    9149 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0327 11:12:45.122113    9149 kubeadm.go:309] [certs] Using the existing "sa" key
	I0327 11:12:45.122140    9149 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0327 11:12:45.211182    9149 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0327 11:12:45.357171    9149 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0327 11:12:45.527405    9149 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0327 11:12:45.566980    9149 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0327 11:12:45.600100    9149 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0327 11:12:45.600646    9149 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0327 11:12:45.600668    9149 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0327 11:12:45.676139    9149 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0327 11:12:42.259408    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:42.259434    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:45.680009    9149 out.go:204]   - Booting up control plane ...
	I0327 11:12:45.680061    9149 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0327 11:12:45.680113    9149 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0327 11:12:45.680148    9149 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0327 11:12:45.680205    9149 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0327 11:12:45.680870    9149 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0327 11:12:50.184328    9149 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.502751 seconds
	I0327 11:12:50.184399    9149 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0327 11:12:50.187905    9149 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0327 11:12:50.703160    9149 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0327 11:12:50.703352    9149 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-599000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0327 11:12:51.207533    9149 kubeadm.go:309] [bootstrap-token] Using token: 644p81.qjqgd0r862w3b1j6
	I0327 11:12:47.261566    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:47.261627    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:51.209720    9149 out.go:204]   - Configuring RBAC rules ...
	I0327 11:12:51.209781    9149 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0327 11:12:51.209884    9149 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0327 11:12:51.216350    9149 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0327 11:12:51.217375    9149 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0327 11:12:51.218278    9149 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0327 11:12:51.219271    9149 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0327 11:12:51.222356    9149 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0327 11:12:51.403198    9149 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0327 11:12:51.612736    9149 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0327 11:12:51.613209    9149 kubeadm.go:309] 
	I0327 11:12:51.613249    9149 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0327 11:12:51.613252    9149 kubeadm.go:309] 
	I0327 11:12:51.613315    9149 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0327 11:12:51.613321    9149 kubeadm.go:309] 
	I0327 11:12:51.613350    9149 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0327 11:12:51.613392    9149 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0327 11:12:51.613423    9149 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0327 11:12:51.613426    9149 kubeadm.go:309] 
	I0327 11:12:51.613453    9149 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0327 11:12:51.613461    9149 kubeadm.go:309] 
	I0327 11:12:51.613484    9149 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0327 11:12:51.613487    9149 kubeadm.go:309] 
	I0327 11:12:51.613513    9149 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0327 11:12:51.613570    9149 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0327 11:12:51.613610    9149 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0327 11:12:51.613616    9149 kubeadm.go:309] 
	I0327 11:12:51.613662    9149 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0327 11:12:51.613707    9149 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0327 11:12:51.613713    9149 kubeadm.go:309] 
	I0327 11:12:51.613755    9149 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 644p81.qjqgd0r862w3b1j6 \
	I0327 11:12:51.613820    9149 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:45ef07f6bc58c462e2dd74e42d978dd37f899f74c2cb02d53ec43ace0e53a157 \
	I0327 11:12:51.613832    9149 kubeadm.go:309] 	--control-plane 
	I0327 11:12:51.613834    9149 kubeadm.go:309] 
	I0327 11:12:51.613879    9149 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0327 11:12:51.613883    9149 kubeadm.go:309] 
	I0327 11:12:51.613931    9149 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 644p81.qjqgd0r862w3b1j6 \
	I0327 11:12:51.613997    9149 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:45ef07f6bc58c462e2dd74e42d978dd37f899f74c2cb02d53ec43ace0e53a157 
	I0327 11:12:51.614060    9149 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0327 11:12:51.614067    9149 cni.go:84] Creating CNI manager for ""
	I0327 11:12:51.614075    9149 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:12:51.617914    9149 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0327 11:12:51.623103    9149 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0327 11:12:51.626186    9149 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
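
The 457 bytes copied to /etc/cni/net.d/1-k8s.conflist are the bridge CNI config announced above. The log does not show the file's contents; a typical bridge conflist has roughly this shape (the field values below are illustrative assumptions, not the bytes minikube actually wrote):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
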
	I0327 11:12:51.632123    9149 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0327 11:12:51.632176    9149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 11:12:51.632252    9149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-599000 minikube.k8s.io/updated_at=2024_03_27T11_12_51_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=475b39f6a1dc94a0c7060d2eec10d9b995edcd28 minikube.k8s.io/name=running-upgrade-599000 minikube.k8s.io/primary=true
	I0327 11:12:51.680424    9149 ops.go:34] apiserver oom_adj: -16
	I0327 11:12:51.680480    9149 kubeadm.go:1107] duration metric: took 48.354125ms to wait for elevateKubeSystemPrivileges
	W0327 11:12:51.680500    9149 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0327 11:12:51.680503    9149 kubeadm.go:393] duration metric: took 4m11.735837958s to StartCluster
	I0327 11:12:51.680513    9149 settings.go:142] acquiring lock: {Name:mkcc43d09d3f5881bfa353312da863c5f99807b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:12:51.680676    9149 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:12:51.681036    9149 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/kubeconfig: {Name:mkbb2a293ed6f55faa985236f49efd479958575e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:12:51.681234    9149 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:12:51.685883    9149 out.go:177] * Verifying Kubernetes components...
	I0327 11:12:51.681257    9149 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0327 11:12:51.681428    9149 config.go:182] Loaded profile config "running-upgrade-599000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 11:12:51.693935    9149 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-599000"
	I0327 11:12:51.693940    9149 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-599000"
	I0327 11:12:51.693948    9149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 11:12:51.693962    9149 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-599000"
	W0327 11:12:51.693966    9149 addons.go:243] addon storage-provisioner should already be in state true
	I0327 11:12:51.693976    9149 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-599000"
	I0327 11:12:51.694000    9149 host.go:66] Checking if "running-upgrade-599000" exists ...
	I0327 11:12:51.695106    9149 kapi.go:59] client config for running-upgrade-599000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/running-upgrade-599000/client.key", CAFile:"/Users/jenkins/minikube-integration/18517-6013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10432eb90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 11:12:51.695675    9149 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-599000"
	W0327 11:12:51.695679    9149 addons.go:243] addon default-storageclass should already be in state true
	I0327 11:12:51.695686    9149 host.go:66] Checking if "running-upgrade-599000" exists ...
	I0327 11:12:51.700909    9149 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 11:12:51.703912    9149 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 11:12:51.703917    9149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 11:12:51.703924    9149 sshutil.go:53] new ssh client: &{IP:localhost Port:51115 SSHKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/running-upgrade-599000/id_rsa Username:docker}
	I0327 11:12:51.704662    9149 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 11:12:51.704667    9149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 11:12:51.704671    9149 sshutil.go:53] new ssh client: &{IP:localhost Port:51115 SSHKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/running-upgrade-599000/id_rsa Username:docker}
	I0327 11:12:51.782993    9149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 11:12:51.787726    9149 api_server.go:52] waiting for apiserver process to appear ...
	I0327 11:12:51.787781    9149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 11:12:51.792726    9149 api_server.go:72] duration metric: took 111.479125ms to wait for apiserver process to appear ...
	I0327 11:12:51.792738    9149 api_server.go:88] waiting for apiserver healthz status ...
	I0327 11:12:51.792746    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:51.857171    9149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0327 11:12:51.858459    9149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 11:12:52.263801    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:52.263819    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:56.794871    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
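
The timestamps show each healthz probe giving up after exactly five seconds: "Client.Timeout exceeded while awaiting headers" is Go's net/http error when Client.Timeout fires before response headers arrive, after which the checker either re-probes or falls back to a log-gathering cycle. A minimal sketch of one such probe, assuming TLS verification is skipped for brevity (the real checker trusts the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // checkHealthz performs one apiserver health probe with the same
    // five-second client timeout visible in the log timestamps.
    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Assumption: verification skipped for the sketch;
    			// minikube pins the cluster CA instead.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. Client.Timeout exceeded while awaiting headers
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %s", resp.Status)
    	}
    	return nil
    }

    func main() {
    	// Retry until healthy; the real checker additionally gives up
    	// after an overall deadline and resets the cluster, as seen above.
    	for {
    		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
    			fmt.Println("stopped:", err)
    			time.Sleep(time.Second)
    			continue
    		}
    		fmt.Println("apiserver healthy")
    		return
    	}
    }
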
	I0327 11:12:56.794915    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:57.266018    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:57.266320    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:12:57.291609    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:12:57.291735    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:12:57.310085    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:12:57.310173    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:12:57.323022    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:12:57.323100    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:12:57.334306    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:12:57.334389    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:12:57.346419    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:12:57.346486    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:12:57.360659    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:12:57.360732    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:12:57.380211    9387 logs.go:276] 0 containers: []
	W0327 11:12:57.380221    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:12:57.380277    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:12:57.390678    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:12:57.390702    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:12:57.390706    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:12:57.406063    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:12:57.406073    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:12:57.416815    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:12:57.416825    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:12:57.434450    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:12:57.434460    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:12:57.445584    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:12:57.445594    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:12:57.458455    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:12:57.458464    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:12:57.473025    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:12:57.473035    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:12:57.493779    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:12:57.493790    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:12:57.507243    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:12:57.507257    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:12:57.524483    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:12:57.524493    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:12:57.632924    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:12:57.632944    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:12:57.645229    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:12:57.645244    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:12:57.649814    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:12:57.649822    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:12:57.689283    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:12:57.689299    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:12:57.707139    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:12:57.707152    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:12:57.723766    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:12:57.723776    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:12:57.749225    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:12:57.749241    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:13:00.288642    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:01.795261    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:01.795284    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:05.290903    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:05.291046    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:13:05.303069    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:13:05.303145    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:13:05.314094    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:13:05.314168    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:13:05.327268    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:13:05.327332    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:13:05.339594    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:13:05.339666    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:13:05.349870    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:13:05.349932    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:13:05.361026    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:13:05.361094    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:13:05.371675    9387 logs.go:276] 0 containers: []
	W0327 11:13:05.371693    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:13:05.371763    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:13:05.381958    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:13:05.381978    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:13:05.381982    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:13:05.396561    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:13:05.396574    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:13:05.409429    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:13:05.409441    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:13:05.422674    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:13:05.422685    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:13:05.459906    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:13:05.459918    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:13:05.471697    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:13:05.471709    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:13:05.484271    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:13:05.484283    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:13:05.522220    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:13:05.522232    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:13:05.537648    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:13:05.537660    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:13:05.558767    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:13:05.558780    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:13:05.573472    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:13:05.573483    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:13:05.610592    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:13:05.610605    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:13:05.623362    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:13:05.623372    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:13:05.642952    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:13:05.642965    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:13:05.667859    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:13:05.667873    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:13:05.672022    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:13:05.672033    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:13:05.687341    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:13:05.687360    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:13:06.795677    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:06.795711    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:08.203734    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:11.795956    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:11.796028    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:13.205816    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:13.206202    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:13:13.250432    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:13:13.250582    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:13:13.271216    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:13:13.271310    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:13:13.285435    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:13:13.285525    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:13:13.297961    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:13:13.298038    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:13:13.311832    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:13:13.311896    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:13:13.322977    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:13:13.323050    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:13:13.332837    9387 logs.go:276] 0 containers: []
	W0327 11:13:13.332850    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:13:13.332907    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:13:13.344613    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:13:13.344632    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:13:13.344639    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:13:13.349290    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:13:13.349298    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:13:13.363382    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:13:13.363391    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:13:13.375124    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:13:13.375134    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:13:13.386346    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:13:13.386356    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:13:13.398131    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:13:13.398140    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:13:13.421691    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:13:13.421700    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:13:13.462056    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:13:13.462066    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:13:13.480824    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:13:13.480835    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:13:13.518674    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:13:13.518686    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:13:13.533413    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:13:13.533425    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:13:13.550954    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:13:13.550965    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:13:13.570726    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:13:13.570736    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:13:13.582595    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:13:13.582606    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:13:13.621534    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:13:13.621545    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:13:13.636528    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:13:13.636539    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:13:13.647953    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:13:13.647964    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:13:16.165116    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:16.796568    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:16.796627    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:21.167335    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:21.167500    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:13:21.178738    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:13:21.178815    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:13:21.194939    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:13:21.195008    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:13:21.205315    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:13:21.205378    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:13:21.215658    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:13:21.215735    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:13:21.226017    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:13:21.226091    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:13:21.247531    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:13:21.247600    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:13:21.257522    9387 logs.go:276] 0 containers: []
	W0327 11:13:21.257534    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:13:21.257597    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:13:21.275755    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:13:21.275788    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:13:21.275794    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:13:21.797334    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:21.797376    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0327 11:13:22.193658    9149 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0327 11:13:22.197961    9149 out.go:177] * Enabled addons: storage-provisioner
	I0327 11:13:22.210969    9149 addons.go:505] duration metric: took 30.530524167s for enable addons: enabled=[storage-provisioner]
	I0327 11:13:21.314812    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:13:21.318375    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:13:21.330840    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:13:21.330854    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:13:21.350406    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:13:21.350418    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:13:21.361756    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:13:21.361768    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:13:21.398058    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:13:21.398068    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:13:21.401934    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:13:21.401940    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:13:21.415951    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:13:21.415965    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:13:21.431160    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:13:21.431172    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:13:21.446144    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:13:21.446154    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:13:21.458410    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:13:21.458425    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:13:21.494310    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:13:21.494324    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:13:21.511826    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:13:21.511843    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:13:21.523941    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:13:21.523953    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:13:21.539721    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:13:21.539732    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:13:21.552312    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:13:21.552324    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:13:21.573417    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:13:21.573432    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:13:24.100410    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:26.797840    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:26.797893    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:29.102529    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:29.102706    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:13:29.115121    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:13:29.115200    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:13:29.130647    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:13:29.130709    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:13:29.141626    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:13:29.141693    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:13:29.152711    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:13:29.152784    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:13:29.164528    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:13:29.164598    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:13:29.175283    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:13:29.175350    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:13:29.185258    9387 logs.go:276] 0 containers: []
	W0327 11:13:29.185268    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:13:29.185316    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:13:29.195061    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:13:29.195080    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:13:29.195085    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:13:29.209642    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:13:29.209654    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:13:29.221294    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:13:29.221305    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:13:29.236501    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:13:29.236512    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:13:29.248087    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:13:29.248101    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:13:29.259262    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:13:29.259273    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:13:29.296396    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:13:29.296408    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:13:29.300871    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:13:29.300878    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:13:29.314632    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:13:29.314643    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:13:29.352317    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:13:29.352332    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:13:29.367991    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:13:29.368003    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:13:29.379730    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:13:29.379741    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:13:29.398437    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:13:29.398446    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:13:29.413221    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:13:29.413235    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:13:29.437791    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:13:29.437798    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:13:29.473834    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:13:29.473845    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:13:29.485911    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:13:29.485922    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:13:31.799039    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:31.799069    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:31.999795    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:36.799668    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:36.799688    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:37.002001    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:37.002181    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:13:37.018823    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:13:37.018908    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:13:37.031605    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:13:37.031671    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:13:37.043282    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:13:37.043343    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:13:37.054193    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:13:37.054259    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:13:37.065711    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:13:37.065779    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:13:37.078301    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:13:37.078365    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:13:37.088195    9387 logs.go:276] 0 containers: []
	W0327 11:13:37.088210    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:13:37.088263    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:13:37.098548    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:13:37.098568    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:13:37.098574    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:13:37.114206    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:13:37.114225    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:13:37.138169    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:13:37.138177    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:13:37.175296    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:13:37.175307    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:13:37.179276    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:13:37.179283    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:13:37.219081    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:13:37.219091    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:13:37.233603    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:13:37.233613    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:13:37.248406    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:13:37.248415    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:13:37.260475    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:13:37.260490    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:13:37.278487    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:13:37.278498    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:13:37.289600    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:13:37.289611    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:13:37.302004    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:13:37.302018    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:13:37.314160    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:13:37.314171    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:13:37.325773    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:13:37.325784    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:13:37.366632    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:13:37.366645    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:13:37.382394    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:13:37.382407    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:13:37.396694    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:13:37.396706    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:13:39.911735    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:41.801365    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:41.801421    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:44.914029    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:44.914470    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:13:44.949181    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:13:44.949308    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:13:44.971605    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:13:44.971689    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:13:44.985270    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:13:44.985353    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:13:44.997912    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:13:44.997982    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:13:45.008641    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:13:45.008697    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:13:45.019186    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:13:45.019253    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:13:45.029497    9387 logs.go:276] 0 containers: []
	W0327 11:13:45.029509    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:13:45.029566    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:13:45.040512    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:13:45.040532    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:13:45.040538    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:13:45.082979    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:13:45.082992    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:13:45.096775    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:13:45.096786    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:13:45.108417    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:13:45.108429    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:13:45.119590    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:13:45.119602    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:13:45.143339    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:13:45.143350    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:13:45.157762    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:13:45.157773    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:13:45.170332    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:13:45.170343    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:13:45.187736    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:13:45.187746    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:13:45.202654    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:13:45.202664    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:13:45.241612    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:13:45.241621    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:13:45.245829    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:13:45.245837    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:13:45.259615    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:13:45.259624    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:13:45.274129    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:13:45.274140    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:13:45.289448    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:13:45.289460    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:13:45.327995    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:13:45.328008    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:13:45.342130    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:13:45.342143    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:13:46.803577    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:46.803614    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:47.856836    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:51.805787    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:51.805912    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:13:51.833275    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:13:51.833348    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:13:51.843728    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:13:51.843790    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:13:51.855232    9149 logs.go:276] 2 containers: [7aae4d9bf2e6 110c0cc41345]
	I0327 11:13:51.855296    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:13:51.865988    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:13:51.866051    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:13:51.879830    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:13:51.879896    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:13:51.890715    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:13:51.890769    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:13:51.900838    9149 logs.go:276] 0 containers: []
	W0327 11:13:51.900848    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:13:51.900900    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:13:51.916633    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:13:51.916646    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:13:51.916652    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:13:51.928188    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:13:51.928199    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:13:51.940573    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:13:51.940587    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:13:51.952325    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:13:51.952335    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:13:51.977019    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:13:51.977027    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:13:51.991207    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:13:51.991217    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:13:51.995501    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:13:51.995507    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:13:52.031325    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:13:52.031338    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:13:52.045373    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:13:52.045383    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:13:52.056660    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:13:52.056671    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:13:52.071593    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:13:52.071602    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:13:52.088924    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:13:52.088935    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:13:52.100772    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:13:52.100785    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:13:52.859340    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:52.859483    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:13:52.874212    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:13:52.874291    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:13:52.886149    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:13:52.886220    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:13:52.896589    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:13:52.896656    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:13:52.906926    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:13:52.906993    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:13:52.925382    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:13:52.925447    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:13:52.935559    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:13:52.935617    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:13:52.953585    9387 logs.go:276] 0 containers: []
	W0327 11:13:52.953596    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:13:52.953653    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:13:52.964517    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:13:52.964534    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:13:52.964540    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:13:52.975355    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:13:52.975367    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:13:52.991403    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:13:52.991419    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:13:53.005800    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:13:53.005811    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:13:53.017193    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:13:53.017206    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:13:53.053501    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:13:53.053513    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:13:53.066993    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:13:53.067004    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:13:53.080668    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:13:53.080681    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:13:53.119626    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:13:53.119638    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:13:53.132035    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:13:53.132045    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:13:53.144537    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:13:53.144551    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:13:53.148684    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:13:53.148691    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:13:53.163311    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:13:53.163322    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:13:53.177717    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:13:53.177727    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:13:53.192618    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:13:53.192629    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:13:53.209453    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:13:53.209465    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:13:53.233643    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:13:53.233651    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:13:55.773096    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:54.641238    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:00.775471    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:00.775806    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:00.808072    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:14:00.808200    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:00.827630    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:14:00.827711    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:00.841635    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:14:00.841722    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:00.853846    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:14:00.853921    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:00.864642    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:14:00.864709    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:00.875552    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:14:00.875612    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:00.886054    9387 logs.go:276] 0 containers: []
	W0327 11:14:00.886065    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:00.886111    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:00.896457    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:14:00.896476    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:00.896481    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:00.931767    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:14:00.931777    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:14:00.947927    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:14:00.947940    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:14:00.960323    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:14:00.960336    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:14:00.971807    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:14:00.971821    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:14:00.988805    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:00.988816    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:00.992777    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:14:00.992784    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:14:01.006211    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:14:01.006222    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:14:01.045104    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:14:01.045117    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:14:01.060042    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:14:01.060052    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:14:01.075477    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:01.075488    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:01.100413    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:01.100421    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:01.137727    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:14:01.137735    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:14:01.148824    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:14:01.148834    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:14:01.163997    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:14:01.164007    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:14:01.175440    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:14:01.175450    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:14:01.186877    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:14:01.186888    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:13:59.643447    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:59.643619    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:13:59.655476    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:13:59.655545    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:13:59.667844    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:13:59.667908    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:13:59.678127    9149 logs.go:276] 2 containers: [7aae4d9bf2e6 110c0cc41345]
	I0327 11:13:59.678187    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:13:59.692830    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:13:59.692888    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:13:59.702871    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:13:59.702929    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:13:59.713611    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:13:59.713669    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:13:59.726367    9149 logs.go:276] 0 containers: []
	W0327 11:13:59.726377    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:13:59.726426    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:13:59.736981    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:13:59.736998    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:13:59.737003    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:13:59.773421    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:13:59.773431    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:13:59.777694    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:13:59.777703    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:13:59.793892    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:13:59.793904    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:13:59.807548    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:13:59.807562    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:13:59.819488    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:13:59.819498    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:13:59.831167    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:13:59.831177    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:13:59.848695    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:13:59.848706    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:13:59.873820    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:13:59.873830    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:13:59.908165    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:13:59.908176    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:13:59.920506    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:13:59.920517    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:13:59.935584    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:13:59.935593    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:13:59.947292    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:13:59.947316    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:02.460158    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:03.701072    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:07.461125    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:07.461510    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:07.501158    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:14:07.501288    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:07.528084    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:14:07.528172    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:07.542204    9149 logs.go:276] 2 containers: [7aae4d9bf2e6 110c0cc41345]
	I0327 11:14:07.542275    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:07.553967    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:14:07.554029    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:07.564972    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:14:07.565047    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:07.576089    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:14:07.576157    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:07.586925    9149 logs.go:276] 0 containers: []
	W0327 11:14:07.586935    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:07.586990    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:07.597950    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:14:07.597967    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:14:07.597972    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:14:07.609731    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:14:07.609742    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:14:07.626068    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:14:07.626084    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:14:07.639920    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:14:07.639933    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:14:07.660253    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:14:07.660263    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:07.672191    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:07.672202    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:07.710779    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:14:07.710788    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:14:07.725741    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:14:07.725751    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:14:07.741138    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:14:07.741149    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:14:07.757392    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:07.757402    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:07.780666    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:07.780674    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:07.784912    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:07.784919    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:07.820714    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:14:07.820724    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:14:08.703388    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:08.703538    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:08.722704    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:14:08.722784    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:08.734932    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:14:08.735004    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:08.745584    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:14:08.745655    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:08.756018    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:14:08.756091    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:08.773190    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:14:08.773258    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:08.783562    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:14:08.783630    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:08.793285    9387 logs.go:276] 0 containers: []
	W0327 11:14:08.793298    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:08.793360    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:08.804192    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:14:08.804212    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:14:08.804217    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:14:08.819689    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:14:08.819698    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:14:08.830972    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:14:08.830984    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:14:08.867885    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:14:08.867895    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:14:08.881594    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:14:08.881603    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:14:08.893115    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:14:08.893126    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:14:08.905352    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:14:08.905365    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:14:08.917618    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:08.917629    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:08.942484    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:08.942495    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:08.981175    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:14:08.981184    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:14:08.995074    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:14:08.995088    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:14:09.008917    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:09.008931    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:09.045174    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:14:09.045187    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:14:09.063098    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:14:09.063108    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:14:09.077405    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:14:09.077419    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:14:09.089597    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:14:09.089608    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:09.102563    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:09.102578    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:10.337855    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:11.609066    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:15.340185    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:15.340536    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:15.375079    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:14:15.375227    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:15.395843    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:14:15.395921    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:15.410747    9149 logs.go:276] 2 containers: [7aae4d9bf2e6 110c0cc41345]
	I0327 11:14:15.410822    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:15.422991    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:14:15.423051    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:15.434683    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:14:15.434747    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:15.446065    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:14:15.446130    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:15.457144    9149 logs.go:276] 0 containers: []
	W0327 11:14:15.457154    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:15.457201    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:15.468517    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:14:15.468533    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:14:15.468538    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:14:15.484481    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:14:15.484492    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:14:15.497841    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:14:15.497852    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:14:15.516316    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:14:15.516326    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:14:15.528201    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:15.528211    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:15.552812    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:15.552819    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:15.590803    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:15.590810    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:15.649123    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:14:15.649136    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:14:15.665419    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:14:15.665429    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:14:15.680489    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:14:15.680501    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:15.692949    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:15.692959    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:15.697136    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:14:15.697143    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:14:15.715586    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:14:15.715598    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:14:16.611293    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:16.611562    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:16.636348    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:14:16.636478    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:16.656325    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:14:16.656403    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:16.670288    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:14:16.670361    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:16.681847    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:14:16.681920    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:16.693161    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:14:16.693242    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:16.708917    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:14:16.708996    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:16.719696    9387 logs.go:276] 0 containers: []
	W0327 11:14:16.719706    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:16.719758    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:16.733398    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:14:16.733417    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:14:16.733422    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:14:16.748171    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:14:16.748182    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:14:16.759144    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:16.759156    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:16.763456    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:16.763463    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:16.797978    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:14:16.797991    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:14:16.818917    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:14:16.818931    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:14:16.833505    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:14:16.833514    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:14:16.846121    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:14:16.846134    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:14:16.861361    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:14:16.861371    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:14:16.878178    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:14:16.878191    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:14:16.889648    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:16.889659    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:16.913876    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:16.913886    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:16.952621    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:14:16.952630    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:14:16.989844    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:14:16.989855    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:14:17.007719    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:14:17.007729    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:14:17.019532    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:14:17.019543    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:14:17.032096    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:14:17.032110    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:19.545601    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:18.233034    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:24.547850    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:24.548027    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:24.567780    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:14:24.567872    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:24.581679    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:14:24.581759    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:24.593305    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:14:24.593374    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:24.604403    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:14:24.604472    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:24.614600    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:14:24.614672    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:24.626202    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:14:24.626274    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:24.636601    9387 logs.go:276] 0 containers: []
	W0327 11:14:24.636612    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:24.636669    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:24.648345    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:14:24.648367    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:14:24.648374    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:14:24.666222    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:24.666233    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:24.704103    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:24.704111    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:24.708177    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:14:24.708183    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:14:24.722798    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:24.722808    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:24.746929    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:14:24.746937    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:14:24.761742    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:14:24.761752    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:14:24.779360    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:14:24.779372    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:14:24.798915    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:14:24.798929    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:14:24.841308    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:14:24.841321    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:14:24.859323    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:14:24.859333    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:24.871036    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:14:24.871049    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:14:24.882772    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:14:24.882783    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:14:24.896861    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:14:24.896872    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:14:24.907863    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:24.907875    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:24.951423    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:14:24.951438    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:14:24.965720    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:14:24.965731    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:14:23.235485    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:23.235897    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:23.272193    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:14:23.272317    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:23.291450    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:14:23.291532    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:23.305938    9149 logs.go:276] 2 containers: [7aae4d9bf2e6 110c0cc41345]
	I0327 11:14:23.306011    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:23.318362    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:14:23.318428    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:23.329111    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:14:23.329175    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:23.340144    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:14:23.340208    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:23.351561    9149 logs.go:276] 0 containers: []
	W0327 11:14:23.351572    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:23.351628    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:23.362870    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:14:23.362886    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:23.362891    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:23.398101    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:14:23.398116    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:14:23.412551    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:14:23.412561    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:14:23.425186    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:14:23.425200    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:14:23.437530    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:14:23.437545    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:14:23.450118    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:14:23.450127    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:23.461557    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:23.461571    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:23.497389    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:23.497405    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:23.501916    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:14:23.501922    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:14:23.517158    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:14:23.517172    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:14:23.528493    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:14:23.528502    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:14:23.543678    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:14:23.543692    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:14:23.562003    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:23.562013    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:26.088214    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:27.482643    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:31.090708    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:31.090945    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:31.110908    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:14:31.110994    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:31.125244    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:14:31.125310    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:31.137430    9149 logs.go:276] 2 containers: [7aae4d9bf2e6 110c0cc41345]
	I0327 11:14:31.137489    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:31.153182    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:14:31.153240    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:31.164897    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:14:31.164964    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:31.177168    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:14:31.177248    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:31.187602    9149 logs.go:276] 0 containers: []
	W0327 11:14:31.187615    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:31.187673    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:31.198281    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:14:31.198293    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:31.198299    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:31.235226    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:31.235237    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:31.239992    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:31.239998    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:31.274958    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:14:31.274969    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:14:31.287035    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:14:31.287047    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:14:31.299528    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:14:31.299538    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:14:31.317697    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:14:31.317706    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:14:31.332822    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:14:31.332832    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:14:31.347610    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:14:31.347622    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:14:31.363361    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:14:31.363371    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:14:31.376192    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:14:31.376201    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:14:31.389357    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:31.389366    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:31.412911    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:14:31.412921    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:32.485444    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:32.485781    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:32.516083    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:14:32.516213    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:32.535341    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:14:32.535425    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:32.549424    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:14:32.549508    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:32.561537    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:14:32.561602    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:32.572100    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:14:32.572156    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:32.583268    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:14:32.583338    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:32.594177    9387 logs.go:276] 0 containers: []
	W0327 11:14:32.594191    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:32.594244    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:32.604971    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:14:32.604988    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:14:32.604995    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:14:32.616668    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:14:32.616679    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:14:32.627824    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:14:32.627838    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:32.643258    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:32.643271    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:32.647328    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:14:32.647338    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:14:32.683827    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:14:32.683839    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:14:32.698424    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:14:32.698436    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:14:32.712583    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:14:32.712592    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:14:32.726948    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:14:32.726958    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:14:32.738102    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:14:32.738114    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:14:32.749880    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:32.749891    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:32.774938    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:32.774950    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:32.813797    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:32.813809    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:32.855264    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:14:32.855274    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:14:32.867931    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:14:32.867944    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:14:32.883160    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:14:32.883170    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:14:32.897503    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:14:32.897515    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:14:35.417593    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:33.928168    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:40.419937    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:40.420231    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:40.448535    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:14:40.448639    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:40.470487    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:14:40.470567    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:40.483472    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:14:40.483545    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:40.495705    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:14:40.495774    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:40.505681    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:14:40.505740    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:40.515943    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:14:40.516000    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:40.526082    9387 logs.go:276] 0 containers: []
	W0327 11:14:40.526093    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:40.526143    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:40.536748    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:14:40.536768    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:40.536775    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:40.541143    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:14:40.541148    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:14:40.555421    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:14:40.555432    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:14:40.567485    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:40.567495    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:40.590328    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:14:40.590336    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:40.603660    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:40.603670    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:40.638913    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:14:40.638925    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:14:40.676989    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:14:40.676999    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:14:40.688451    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:14:40.688465    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:14:40.699997    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:14:40.700006    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:14:40.713767    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:14:40.713778    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:14:40.725238    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:14:40.725248    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:14:40.742027    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:14:40.742038    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:14:40.756180    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:14:40.756190    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:14:40.767415    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:40.767427    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:40.805683    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:14:40.805692    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:14:40.820175    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:14:40.820189    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:14:38.930466    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:38.930708    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:38.951175    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:14:38.951263    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:38.965479    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:14:38.965542    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:38.977373    9149 logs.go:276] 2 containers: [7aae4d9bf2e6 110c0cc41345]
	I0327 11:14:38.977435    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:38.988220    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:14:38.988285    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:38.998357    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:14:38.998418    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:39.012985    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:14:39.013042    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:39.022978    9149 logs.go:276] 0 containers: []
	W0327 11:14:39.022987    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:39.023037    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:39.033726    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:14:39.033742    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:39.033747    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:39.070637    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:39.070646    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:39.075364    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:14:39.075371    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:14:39.087939    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:14:39.087950    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:14:39.099966    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:39.099977    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:39.124530    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:14:39.124546    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:39.137215    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:39.137227    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:39.172459    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:14:39.172471    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:14:39.186656    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:14:39.186667    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:14:39.200347    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:14:39.200360    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:14:39.215099    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:14:39.215110    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:14:39.226710    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:14:39.226720    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:14:39.243879    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:14:39.243890    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:14:41.757872    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:43.337297    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:46.760166    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:46.760325    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:46.771095    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:14:46.771171    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:46.782068    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:14:46.782145    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:46.792673    9149 logs.go:276] 2 containers: [7aae4d9bf2e6 110c0cc41345]
	I0327 11:14:46.792742    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:46.803475    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:14:46.803545    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:46.814799    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:14:46.814865    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:46.825529    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:14:46.825586    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:46.836293    9149 logs.go:276] 0 containers: []
	W0327 11:14:46.836305    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:46.836375    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:46.846553    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:14:46.846567    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:14:46.846572    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:14:46.861474    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:14:46.861484    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:14:46.872850    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:14:46.872859    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:14:46.884468    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:14:46.884477    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:46.895405    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:14:46.895414    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:14:46.910661    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:46.910671    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:46.915689    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:46.915698    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:46.953418    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:14:46.953431    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:14:46.967976    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:14:46.967986    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:14:46.979097    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:14:46.979108    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:14:46.990846    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:14:46.990857    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:14:47.008583    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:47.008593    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:47.033430    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:47.033438    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:48.338565    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:48.339231    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:48.374946    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:14:48.375084    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:48.396576    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:14:48.396682    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:48.412066    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:14:48.412148    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:48.424176    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:14:48.424247    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:48.434845    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:14:48.434907    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:48.445502    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:14:48.445572    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:48.455702    9387 logs.go:276] 0 containers: []
	W0327 11:14:48.455715    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:48.455772    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:48.466352    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:14:48.466373    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:14:48.466379    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:14:48.504887    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:14:48.504897    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:14:48.516559    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:14:48.516570    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:48.529751    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:14:48.529762    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:14:48.542191    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:48.542202    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:48.564877    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:48.564885    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:48.601491    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:48.601499    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:48.636755    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:14:48.636768    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:14:48.652576    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:14:48.652587    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:14:48.663508    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:14:48.663518    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:14:48.678127    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:14:48.678138    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:14:48.696080    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:14:48.696093    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:14:48.715743    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:48.715753    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:48.719914    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:14:48.719923    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:14:48.733393    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:14:48.733403    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:14:48.751127    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:14:48.751136    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:14:48.763187    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:14:48.763196    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:14:51.280272    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:49.572392    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:56.282904    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:56.283076    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:56.301767    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:14:56.301851    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:54.574741    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:54.574904    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:54.593883    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:14:54.593987    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:54.607928    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:14:54.608004    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:54.619323    9149 logs.go:276] 2 containers: [7aae4d9bf2e6 110c0cc41345]
	I0327 11:14:54.619386    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:54.629418    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:14:54.629485    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:54.642962    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:14:54.643027    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:54.658108    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:14:54.658177    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:54.667702    9149 logs.go:276] 0 containers: []
	W0327 11:14:54.667714    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:54.667773    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:54.678253    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:14:54.678270    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:14:54.678276    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:14:54.693139    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:14:54.693149    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:14:54.704917    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:14:54.704926    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:14:54.722168    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:14:54.722181    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:14:54.734534    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:14:54.734546    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:54.747063    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:54.747082    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:54.751684    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:54.751696    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:54.785638    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:14:54.785649    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:14:54.800395    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:14:54.800405    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:14:54.814618    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:14:54.814629    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:14:54.826581    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:14:54.826596    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:14:54.838351    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:54.838364    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:54.862311    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:54.862320    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:57.403056    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:56.315620    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:14:56.315694    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:56.326816    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:14:56.326882    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:56.339105    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:14:56.339179    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:56.349538    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:14:56.349605    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:56.360259    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:14:56.360326    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:56.370232    9387 logs.go:276] 0 containers: []
	W0327 11:14:56.370248    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:56.370304    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:56.381278    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:14:56.381298    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:14:56.381303    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:14:56.418227    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:14:56.418238    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:14:56.432382    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:14:56.432391    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:14:56.443328    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:56.443343    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:56.447628    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:14:56.447635    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:14:56.458855    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:14:56.458866    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:14:56.471202    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:14:56.471215    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:14:56.485977    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:14:56.485990    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:14:56.503626    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:56.503638    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:56.526018    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:14:56.526025    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:14:56.540132    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:14:56.540143    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:14:56.551356    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:14:56.551367    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:14:56.568756    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:14:56.568767    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:14:56.583941    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:14:56.583951    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:56.596679    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:56.596691    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:56.634419    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:56.634430    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:56.676223    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:14:56.676233    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:14:59.191114    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:02.405487    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:02.405679    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:02.425794    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:15:02.425885    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:02.439956    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:15:02.440034    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:02.451722    9149 logs.go:276] 2 containers: [7aae4d9bf2e6 110c0cc41345]
	I0327 11:15:02.451794    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:02.461876    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:15:02.461933    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:02.472659    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:15:02.472732    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:02.483365    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:15:02.483426    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:02.493010    9149 logs.go:276] 0 containers: []
	W0327 11:15:02.493022    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:02.493079    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:02.503478    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:15:02.503493    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:15:02.503499    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:02.515024    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:15:02.515037    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:15:02.529317    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:15:02.529328    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:15:02.545211    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:15:02.545222    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:15:02.560027    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:15:02.560036    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:15:02.577668    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:15:02.577677    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:15:02.589010    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:15:02.589023    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:15:02.600696    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:02.600706    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:02.623872    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:02.623879    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:02.659940    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:02.659950    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:02.664196    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:02.664202    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:02.697784    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:15:02.697797    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:15:02.712173    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:15:02.712183    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:15:04.193334    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:04.193448    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:04.207016    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:15:04.207100    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:04.220029    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:15:04.220100    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:04.232169    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:15:04.232244    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:04.243567    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:15:04.243647    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:04.255482    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:15:04.255557    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:04.275162    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:15:04.275234    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:04.286847    9387 logs.go:276] 0 containers: []
	W0327 11:15:04.286859    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:04.286923    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:04.298511    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:15:04.298529    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:15:04.298534    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:15:04.312017    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:15:04.312028    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:15:04.323987    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:15:04.323999    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:15:04.346266    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:15:04.346286    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:15:04.361001    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:15:04.361013    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:15:04.373041    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:15:04.373056    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:04.385190    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:04.385201    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:04.389830    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:15:04.389837    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:15:04.410716    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:15:04.410726    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:15:04.425158    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:15:04.425169    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:15:04.436122    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:15:04.436133    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:15:04.451251    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:15:04.451261    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:15:04.468765    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:15:04.468776    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:15:04.483270    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:04.483280    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:04.520874    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:04.520887    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:04.556340    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:15:04.556351    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:15:04.594014    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:04.594027    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:05.225701    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:07.119097    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:10.228029    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:10.228260    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:10.245408    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:15:10.245492    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:10.259465    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:15:10.259529    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:10.270961    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:15:10.271023    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:10.285052    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:15:10.285112    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:10.295796    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:15:10.295853    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:10.306856    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:15:10.306915    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:10.317264    9149 logs.go:276] 0 containers: []
	W0327 11:15:10.317274    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:10.317323    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:10.328362    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:15:10.328379    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:10.328384    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:10.365015    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:10.365025    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:10.388118    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:15:10.388125    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:15:10.406525    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:15:10.406535    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:15:10.422285    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:15:10.422295    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:15:10.433837    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:15:10.433846    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:10.445810    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:15:10.445820    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:15:10.458064    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:15:10.458074    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:15:10.475199    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:10.475209    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:10.515313    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:15:10.515325    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:15:10.530288    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:15:10.530299    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:15:10.541249    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:15:10.541259    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:15:10.552686    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:10.552695    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:10.557516    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:15:10.557523    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:15:10.571806    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:15:10.571814    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:15:12.121322    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:12.121525    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:12.138576    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:15:12.138662    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:12.150525    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:15:12.150593    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:12.165523    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:15:12.165585    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:12.176509    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:15:12.176574    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:12.186556    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:15:12.186621    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:12.196674    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:15:12.196738    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:12.207073    9387 logs.go:276] 0 containers: []
	W0327 11:15:12.207084    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:12.207137    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:12.217344    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:15:12.217360    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:15:12.217365    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:15:12.229093    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:15:12.229105    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:15:12.240807    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:12.240819    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:12.264604    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:12.264614    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:12.300347    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:15:12.300361    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:15:12.314321    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:15:12.314332    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:15:12.328624    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:15:12.328633    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:15:12.342496    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:15:12.342511    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:15:12.357729    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:12.357738    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:12.362208    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:15:12.362216    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:15:12.377756    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:15:12.377769    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:15:12.391473    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:15:12.391483    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:12.403093    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:12.403103    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:12.439845    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:15:12.439857    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:15:12.451310    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:15:12.451320    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:15:12.468415    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:15:12.468426    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:15:12.479102    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:15:12.479113    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:15:15.023513    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:13.087909    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:20.026054    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:20.026254    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:20.042972    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:15:20.043060    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:20.056269    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:15:20.056352    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:20.067286    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:15:20.067355    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:20.078744    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:15:20.078817    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:20.092123    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:15:20.092191    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:20.102767    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:15:20.102826    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:20.112813    9387 logs.go:276] 0 containers: []
	W0327 11:15:20.112825    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:20.112881    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:20.127988    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:15:20.128008    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:20.128015    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:20.163894    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:15:20.163904    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:20.178131    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:15:20.178141    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:15:20.195494    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:15:20.195505    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:15:20.213541    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:15:20.213552    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:15:20.231865    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:15:20.231875    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:15:20.261016    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:15:20.261038    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:15:20.278565    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:15:20.278577    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:15:20.294622    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:15:20.294632    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:15:20.306127    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:15:20.306140    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:15:20.318439    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:15:20.318450    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:15:20.329500    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:15:20.329512    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:15:20.343157    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:20.343168    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:20.366400    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:20.366408    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:20.403357    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:15:20.403363    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:15:20.414944    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:20.414954    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:20.419093    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:15:20.419103    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:15:18.090455    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:18.090721    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:18.118380    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:15:18.118504    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:18.135461    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:15:18.135552    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:18.149508    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:15:18.149575    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:18.160739    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:15:18.160803    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:18.170576    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:15:18.170643    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:18.181017    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:15:18.181080    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:18.191323    9149 logs.go:276] 0 containers: []
	W0327 11:15:18.191336    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:18.191391    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:18.201969    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:15:18.201989    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:15:18.201994    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:15:18.216680    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:18.216691    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:18.240632    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:15:18.240641    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:15:18.255438    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:15:18.255447    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:15:18.266612    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:15:18.266625    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:15:18.278201    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:15:18.278217    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:15:18.290212    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:15:18.290223    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:18.304373    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:18.304383    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:18.342656    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:18.342669    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:18.347006    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:18.347011    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:18.381183    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:15:18.381196    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:15:18.398272    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:15:18.398283    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:15:18.413448    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:15:18.413460    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:15:18.430546    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:15:18.430555    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:15:18.445039    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:15:18.445048    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:15:20.961633    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:22.958846    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:25.964162    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:25.964514    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:25.995398    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:15:25.995522    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:26.014199    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:15:26.014289    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:26.028277    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:15:26.028348    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:26.044162    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:15:26.044235    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:26.054113    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:15:26.054174    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:26.068089    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:15:26.068150    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:26.077802    9149 logs.go:276] 0 containers: []
	W0327 11:15:26.077811    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:26.077859    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:26.088579    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:15:26.088592    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:26.088597    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:26.115014    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:15:26.115024    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:26.126188    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:26.126199    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:26.130548    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:15:26.130555    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:15:26.142075    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:15:26.142086    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:15:26.156693    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:15:26.156703    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:15:26.170600    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:15:26.170608    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:15:26.182265    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:15:26.182275    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:15:26.193541    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:26.193551    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:26.227744    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:15:26.227755    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:15:26.239376    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:15:26.239386    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:15:26.251032    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:15:26.251043    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:15:26.269842    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:15:26.269852    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:15:26.281842    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:26.281851    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:26.318696    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:15:26.318707    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:15:27.960328    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:27.960698    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:27.992291    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:15:27.992415    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:28.011407    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:15:28.011498    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:28.025661    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:15:28.025738    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:28.037395    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:15:28.037464    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:28.048456    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:15:28.048523    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:28.059112    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:15:28.059183    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:28.069228    9387 logs.go:276] 0 containers: []
	W0327 11:15:28.069240    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:28.069300    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:28.080236    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:15:28.080255    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:15:28.080260    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:15:28.094755    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:15:28.094771    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:15:28.109759    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:15:28.109770    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:15:28.121181    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:15:28.121193    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:15:28.133842    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:15:28.133852    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:15:28.148904    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:15:28.148915    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:15:28.160265    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:28.160276    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:28.197176    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:28.197185    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:28.231999    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:28.232009    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:28.254998    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:15:28.255004    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:28.266915    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:15:28.266925    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:15:28.283969    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:15:28.283979    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:15:28.322625    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:15:28.322636    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:15:28.337861    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:15:28.337872    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:15:28.351158    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:15:28.351173    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:15:28.365986    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:15:28.365998    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:15:28.377178    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:28.377188    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:30.883679    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:28.835311    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:35.884458    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:35.884699    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:35.905245    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:15:35.905338    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:35.920199    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:15:35.920279    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:35.932679    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:15:35.932742    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:35.943160    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:15:35.943234    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:35.953354    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:15:35.953426    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:35.964023    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:15:35.964090    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:35.978100    9387 logs.go:276] 0 containers: []
	W0327 11:15:35.978111    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:35.978164    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:35.988783    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:15:35.988801    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:35.988809    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:35.993379    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:15:35.993385    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:15:36.008804    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:15:36.008813    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:15:36.023116    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:15:36.023127    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:15:36.036329    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:15:36.036341    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:36.054819    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:15:36.054829    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:15:36.091427    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:15:36.091441    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:15:36.105617    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:15:36.105631    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:15:36.117882    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:15:36.117895    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:15:36.135784    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:36.135798    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:36.157587    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:36.157594    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:36.194179    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:15:36.194193    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:15:36.208247    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:15:36.208262    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:15:36.222755    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:15:36.222770    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:15:36.240269    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:15:36.240279    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:15:36.251437    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:36.251446    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:36.288140    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:15:36.288150    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:15:33.837635    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:33.838050    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:33.867953    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:15:33.868067    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:33.886572    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:15:33.886649    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:33.900878    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:15:33.900954    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:33.914071    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:15:33.914138    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:33.925209    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:15:33.925269    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:33.935824    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:15:33.935892    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:33.946490    9149 logs.go:276] 0 containers: []
	W0327 11:15:33.946500    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:33.946548    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:33.962821    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:15:33.962840    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:33.962845    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:33.985939    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:33.985946    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:34.019620    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:15:34.019631    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:15:34.033855    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:15:34.033868    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:15:34.048060    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:15:34.048072    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:15:34.059234    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:34.059244    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:34.063684    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:15:34.063690    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:15:34.077156    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:15:34.077166    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:15:34.092750    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:15:34.092760    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:15:34.104920    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:15:34.104930    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:15:34.116325    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:15:34.116336    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:15:34.128552    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:15:34.128562    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:15:34.145825    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:15:34.145836    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:34.157427    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:34.157438    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:34.193633    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:15:34.193646    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:15:36.709460    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:38.802092    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:41.711671    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:41.711813    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:41.723382    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:15:41.723446    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:41.734013    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:15:41.734092    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:41.744712    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:15:41.744776    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:41.755336    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:15:41.755398    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:41.765533    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:15:41.765596    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:41.776507    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:15:41.776572    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:41.787100    9149 logs.go:276] 0 containers: []
	W0327 11:15:41.787110    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:41.787166    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:41.797918    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:15:41.797934    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:15:41.797942    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:41.809864    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:41.809876    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:41.846002    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:15:41.846014    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:15:41.857763    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:41.857773    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:41.881790    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:41.881802    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:41.920516    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:15:41.920532    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:15:41.935261    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:15:41.935271    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:15:41.946719    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:41.946730    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:41.951038    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:15:41.951044    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:15:41.969496    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:15:41.969506    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:15:41.980855    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:15:41.980865    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:15:41.992726    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:15:41.992737    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:15:42.014659    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:15:42.014669    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:15:42.026427    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:15:42.026438    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:15:42.038448    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:15:42.038458    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
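
Each "Gathering logs for ..." step wraps its command in `/bin/bash -c`, so shell constructs keep working: the "container status" command uses backtick substitution and `||` to prefer crictl and fall back to `docker ps -a`. A sketch of the same wrapping; running locally rather than over SSH is an assumption, and the container ID is one taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherLogs mirrors the pattern in the log: the command is wrapped in
    // /bin/bash -c, as ssh_runner does before executing it inside the guest.
    func gatherLogs(containerID string) (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            "docker logs --tail 400 "+containerID).CombinedOutput()
        return string(out), err
    }

    func main() {
        logs, err := gatherLogs("5c4e7cb35cd7") // a coredns container ID from the log
        if err != nil {
            fmt.Println("gather failed:", err)
        }
        fmt.Print(logs)

        // The "container status" fallback chain, verbatim from the log:
        out, _ := exec.Command("/bin/bash", "-c",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
        fmt.Print(string(out))
    }
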
	I0327 11:15:43.803192    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:43.803485    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:43.821152    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:15:43.821240    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:43.834375    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:15:43.834446    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:43.845073    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:15:43.845138    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:43.855532    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:15:43.855596    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:43.867405    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:15:43.867468    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:43.877649    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:15:43.877715    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:43.887962    9387 logs.go:276] 0 containers: []
	W0327 11:15:43.887975    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:43.888037    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:43.898373    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:15:43.898393    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:15:43.898398    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:15:43.912810    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:15:43.912824    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:15:43.925936    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:15:43.925951    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:15:43.937562    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:43.937575    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:43.941693    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:43.941699    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:43.975029    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:15:43.975044    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:15:43.992214    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:15:43.992228    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:15:44.003996    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:44.004006    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:44.040975    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:15:44.040984    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:15:44.052346    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:44.052357    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:44.074892    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:15:44.074899    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:44.087286    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:15:44.087297    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:15:44.128623    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:15:44.128633    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:15:44.140845    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:15:44.140859    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:15:44.156013    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:15:44.156028    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:15:44.173858    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:15:44.173868    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:15:44.190299    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:15:44.190309    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:15:44.555025    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:46.712360    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:49.557250    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:49.557436    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:49.573897    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:15:49.573973    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:49.588229    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:15:49.588295    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:49.599819    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:15:49.599893    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:49.611039    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:15:49.611109    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:49.622147    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:15:49.622212    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:49.632908    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:15:49.632971    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:49.644236    9149 logs.go:276] 0 containers: []
	W0327 11:15:49.644247    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:49.644298    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:49.654857    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:15:49.654874    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:15:49.654879    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:15:49.669075    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:15:49.669084    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:15:49.680891    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:15:49.680900    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:15:49.695763    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:49.695778    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:49.719027    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:49.719036    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:49.755395    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:15:49.755406    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:15:49.769709    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:15:49.769719    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:15:49.781262    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:15:49.781272    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:49.793999    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:15:49.794009    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:15:49.810782    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:15:49.810797    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:15:49.825933    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:49.825943    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:49.863949    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:49.863957    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:49.868718    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:15:49.868724    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:15:49.880948    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:15:49.880957    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:15:49.899260    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:15:49.899274    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:15:52.415979    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:51.714704    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:51.714883    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:51.729752    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:15:51.729827    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:51.741908    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:15:51.741972    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:51.752297    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:15:51.752368    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:51.762788    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:15:51.762853    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:51.773104    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:15:51.773175    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:51.786386    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:15:51.786462    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:51.797481    9387 logs.go:276] 0 containers: []
	W0327 11:15:51.797492    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:51.797549    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:51.808776    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:15:51.808794    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:15:51.808801    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:15:51.823609    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:15:51.823620    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:15:51.838846    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:15:51.838860    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:51.852791    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:51.852802    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:51.890341    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:15:51.890349    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:15:51.907819    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:51.907830    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:51.930578    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:15:51.930593    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:15:51.949148    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:51.949160    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:51.953712    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:51.953718    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:51.987927    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:15:51.987941    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:15:52.026272    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:15:52.026284    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:15:52.044646    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:15:52.044661    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:15:52.062396    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:15:52.062407    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:15:52.081752    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:15:52.081765    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:15:52.096610    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:15:52.096622    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:15:52.108263    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:15:52.108273    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:15:52.120433    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:15:52.120443    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:15:54.632362    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:57.418389    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:57.418762    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:57.447553    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:15:57.447679    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:57.466603    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:15:57.466700    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:57.480135    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:15:57.480206    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:57.494460    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:15:57.494528    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:57.504918    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:15:57.504986    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:57.515762    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:15:57.515818    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:57.530097    9149 logs.go:276] 0 containers: []
	W0327 11:15:57.530107    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:57.530154    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:57.540289    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:15:57.540309    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:57.540313    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:57.583226    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:15:57.583237    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:15:57.595756    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:15:57.595768    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:15:57.611466    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:15:57.611481    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:15:57.626359    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:15:57.626370    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:15:57.647263    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:15:57.647274    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:15:57.663221    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:15:57.663235    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:15:57.680256    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:15:57.680271    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:15:57.691917    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:15:57.691927    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:15:57.703153    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:57.703163    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:57.708346    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:15:57.708357    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:15:57.722994    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:57.723005    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:57.760860    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:15:57.760869    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:15:57.778943    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:57.778952    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:57.804580    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:15:57.804592    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:59.634577    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:59.634662    9387 kubeadm.go:591] duration metric: took 4m4.099147625s to restartPrimaryControlPlane
	W0327 11:15:59.634716    9387 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0327 11:15:59.634736    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0327 11:16:00.657016    9387 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.022273458s)
	I0327 11:16:00.657094    9387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 11:16:00.662069    9387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 11:16:00.664758    9387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 11:16:00.667625    9387 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 11:16:00.667631    9387 kubeadm.go:156] found existing configuration files:
	
	I0327 11:16:00.667664    9387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/admin.conf
	I0327 11:16:00.670227    9387 kubeadm.go:162] "https://control-plane.minikube.internal:51357" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 11:16:00.670248    9387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 11:16:00.672798    9387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/kubelet.conf
	I0327 11:16:00.675885    9387 kubeadm.go:162] "https://control-plane.minikube.internal:51357" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 11:16:00.675911    9387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 11:16:00.679231    9387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/controller-manager.conf
	I0327 11:16:00.681776    9387 kubeadm.go:162] "https://control-plane.minikube.internal:51357" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 11:16:00.681800    9387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 11:16:00.684323    9387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/scheduler.conf
	I0327 11:16:00.687451    9387 kubeadm.go:162] "https://control-plane.minikube.internal:51357" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 11:16:00.687475    9387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
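
Because the preceding `kubeadm reset` removed /etc/kubernetes/*.conf, every grep for the control-plane endpoint above exits with status 2, and minikube removes the file regardless (`rm -f` is a no-op on a missing file). A sketch of that check-then-remove loop; the endpoint and file list are from the log, the rest is illustrative:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:51357"
        confs := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
        for _, c := range confs {
            path := "/etc/kubernetes/" + c
            // grep exits non-zero when the endpoint is absent or the file does
            // not exist - the "Process exited with status 2" case in the log.
            if err := exec.Command("grep", endpoint, path).Run(); err != nil {
                fmt.Printf("%s may not be in %s - will remove\n", endpoint, path)
                os.Remove(path) // the log runs `sudo rm -f` instead
            }
        }
    }
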
	I0327 11:16:00.690836    9387 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0327 11:16:00.708783    9387 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0327 11:16:00.708817    9387 kubeadm.go:309] [preflight] Running pre-flight checks
	I0327 11:16:00.756972    9387 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0327 11:16:00.757026    9387 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0327 11:16:00.757070    9387 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0327 11:16:00.807434    9387 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0327 11:16:00.812630    9387 out.go:204]   - Generating certificates and keys ...
	I0327 11:16:00.812666    9387 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0327 11:16:00.812707    9387 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0327 11:16:00.812754    9387 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0327 11:16:00.812787    9387 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0327 11:16:00.812826    9387 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0327 11:16:00.812852    9387 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0327 11:16:00.812888    9387 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0327 11:16:00.812923    9387 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0327 11:16:00.812963    9387 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0327 11:16:00.813010    9387 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0327 11:16:00.813031    9387 kubeadm.go:309] [certs] Using the existing "sa" key
	I0327 11:16:00.813065    9387 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0327 11:16:00.972370    9387 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0327 11:16:01.194276    9387 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0327 11:16:01.240712    9387 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0327 11:16:01.327050    9387 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0327 11:16:01.354652    9387 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0327 11:16:01.355059    9387 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0327 11:16:01.355097    9387 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0327 11:16:01.443156    9387 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0327 11:16:00.316385    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:01.450221    9387 out.go:204]   - Booting up control plane ...
	I0327 11:16:01.450276    9387 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0327 11:16:01.450313    9387 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0327 11:16:01.450349    9387 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0327 11:16:01.450393    9387 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0327 11:16:01.450473    9387 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0327 11:16:05.953095    9387 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.503668 seconds
	I0327 11:16:05.953366    9387 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0327 11:16:05.958885    9387 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0327 11:16:06.468319    9387 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0327 11:16:06.468427    9387 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-042000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0327 11:16:06.972666    9387 kubeadm.go:309] [bootstrap-token] Using token: r3uoc4.gdgrt46bsfuxq20p
	I0327 11:16:06.975287    9387 out.go:204]   - Configuring RBAC rules ...
	I0327 11:16:06.975340    9387 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0327 11:16:06.975380    9387 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0327 11:16:06.982635    9387 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0327 11:16:06.983484    9387 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0327 11:16:06.984358    9387 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0327 11:16:06.985210    9387 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0327 11:16:06.988348    9387 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0327 11:16:07.136529    9387 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0327 11:16:07.376507    9387 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0327 11:16:07.377007    9387 kubeadm.go:309] 
	I0327 11:16:07.377039    9387 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0327 11:16:07.377041    9387 kubeadm.go:309] 
	I0327 11:16:07.377076    9387 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0327 11:16:07.377078    9387 kubeadm.go:309] 
	I0327 11:16:07.377137    9387 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0327 11:16:07.377177    9387 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0327 11:16:07.377229    9387 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0327 11:16:07.377232    9387 kubeadm.go:309] 
	I0327 11:16:07.377257    9387 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0327 11:16:07.377260    9387 kubeadm.go:309] 
	I0327 11:16:07.377286    9387 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0327 11:16:07.377289    9387 kubeadm.go:309] 
	I0327 11:16:07.377315    9387 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0327 11:16:07.377361    9387 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0327 11:16:07.377398    9387 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0327 11:16:07.377403    9387 kubeadm.go:309] 
	I0327 11:16:07.377451    9387 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0327 11:16:07.377502    9387 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0327 11:16:07.377508    9387 kubeadm.go:309] 
	I0327 11:16:07.377549    9387 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token r3uoc4.gdgrt46bsfuxq20p \
	I0327 11:16:07.377605    9387 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:45ef07f6bc58c462e2dd74e42d978dd37f899f74c2cb02d53ec43ace0e53a157 \
	I0327 11:16:07.377620    9387 kubeadm.go:309] 	--control-plane 
	I0327 11:16:07.377624    9387 kubeadm.go:309] 
	I0327 11:16:07.377669    9387 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0327 11:16:07.377673    9387 kubeadm.go:309] 
	I0327 11:16:07.377717    9387 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token r3uoc4.gdgrt46bsfuxq20p \
	I0327 11:16:07.377782    9387 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:45ef07f6bc58c462e2dd74e42d978dd37f899f74c2cb02d53ec43ace0e53a157 
	I0327 11:16:07.378007    9387 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0327 11:16:07.378015    9387 cni.go:84] Creating CNI manager for ""
	I0327 11:16:07.378024    9387 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:16:07.384458    9387 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0327 11:16:07.388605    9387 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0327 11:16:07.391525    9387 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
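
The scp step above writes a 457-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist. The exact payload is not shown in the log; the config below is a representative bridge + portmap plugin chain of the kind used with the bridge CNI, with an assumed pod subnet:

    package main

    import "os"

    // Representative bridge CNI config of the kind written to
    // /etc/cni/net.d/1-k8s.conflist. The actual 457-byte payload is not in
    // the log; plugin fields and the pod subnet are illustrative assumptions.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            panic(err)
        }
    }
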
	I0327 11:16:07.397357    9387 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0327 11:16:07.397447    9387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 11:16:07.397483    9387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-042000 minikube.k8s.io/updated_at=2024_03_27T11_16_07_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=475b39f6a1dc94a0c7060d2eec10d9b995edcd28 minikube.k8s.io/name=stopped-upgrade-042000 minikube.k8s.io/primary=true
	I0327 11:16:07.400569    9387 ops.go:34] apiserver oom_adj: -16
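
The oom_adj check above reads /proc/$(pgrep kube-apiserver)/oom_adj and reports -16, i.e. the apiserver is shielded from the kernel OOM killer. The same read, sketched in Go; using `pgrep -n` to pick the newest match is an assumption, since the log shells out plain pgrep:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // The log shells this out as: cat /proc/$(pgrep kube-apiserver)/oom_adj
        pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("no kube-apiserver process:", err)
            return
        }
        adj, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj))) // -16 in the log
    }
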
	I0327 11:16:07.446947    9387 kubeadm.go:1107] duration metric: took 49.541542ms to wait for elevateKubeSystemPrivileges
	W0327 11:16:07.446973    9387 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0327 11:16:07.446982    9387 kubeadm.go:393] duration metric: took 4m11.925216041s to StartCluster
	I0327 11:16:07.446992    9387 settings.go:142] acquiring lock: {Name:mkcc43d09d3f5881bfa353312da863c5f99807b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:16:07.447076    9387 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:16:07.447501    9387 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/kubeconfig: {Name:mkbb2a293ed6f55faa985236f49efd479958575e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:16:07.447712    9387 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:16:07.451485    9387 out.go:177] * Verifying Kubernetes components...
	I0327 11:16:07.447720    9387 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0327 11:16:07.447792    9387 config.go:182] Loaded profile config "stopped-upgrade-042000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 11:16:07.458373    9387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 11:16:07.458387    9387 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-042000"
	I0327 11:16:07.458401    9387 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-042000"
	W0327 11:16:07.458404    9387 addons.go:243] addon storage-provisioner should already be in state true
	I0327 11:16:07.458418    9387 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-042000"
	I0327 11:16:07.458421    9387 host.go:66] Checking if "stopped-upgrade-042000" exists ...
	I0327 11:16:07.458429    9387 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-042000"
	I0327 11:16:07.459926    9387 kapi.go:59] client config for stopped-upgrade-042000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/client.key", CAFile:"/Users/jenkins/minikube-integration/18517-6013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1060d6b90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 11:16:07.460042    9387 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-042000"
	W0327 11:16:07.460047    9387 addons.go:243] addon default-storageclass should already be in state true
	I0327 11:16:07.460055    9387 host.go:66] Checking if "stopped-upgrade-042000" exists ...
	I0327 11:16:07.464328    9387 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
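
The &rest.Config dump at kapi.go:59 above is a client-go configuration pointing at https://10.0.2.15:8443 with the profile's client cert/key and the cluster CA. A minimal sketch that builds an equivalent client and lists nodes, assuming the k8s.io/client-go and k8s.io/apimachinery modules are available; the Nodes list call is illustrative, not the exact call minikube makes here:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Field values mirror the rest.Config dumped in the log.
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/client.crt",
                KeyFile:  "/Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/client.key",
                CAFile:   "/Users/jenkins/minikube-integration/18517-6013/.minikube/ca.crt",
            },
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            fmt.Println("list nodes:", err) // fails while the apiserver is unreachable
            return
        }
        fmt.Println("nodes:", len(nodes.Items))
    }
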
	I0327 11:16:05.318565    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:05.318729    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:16:05.337457    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:16:05.337539    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:16:05.351601    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:16:05.351677    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:16:05.372034    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:16:05.372110    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:16:05.393502    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:16:05.393578    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:16:05.409801    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:16:05.409877    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:16:05.422102    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:16:05.422175    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:16:05.434106    9149 logs.go:276] 0 containers: []
	W0327 11:16:05.434117    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:16:05.434180    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:16:05.446757    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:16:05.446774    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:16:05.446779    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:16:05.462734    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:16:05.462747    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:16:05.475451    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:16:05.475465    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:16:05.515742    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:16:05.515761    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:16:05.520405    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:16:05.520414    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:16:05.534268    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:16:05.534280    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:16:05.545733    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:16:05.545744    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:16:05.557611    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:16:05.557625    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:16:05.570439    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:16:05.570451    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:16:05.584914    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:16:05.584928    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:16:05.607040    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:16:05.607051    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:16:05.619364    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:16:05.619377    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:16:05.656594    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:16:05.656608    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:16:05.668420    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:16:05.668432    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:16:05.680355    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:16:05.680367    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:16:07.467490    9387 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 11:16:07.467498    9387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 11:16:07.467505    9387 sshutil.go:53] new ssh client: &{IP:localhost Port:51322 SSHKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/stopped-upgrade-042000/id_rsa Username:docker}
	I0327 11:16:07.468140    9387 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 11:16:07.468145    9387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 11:16:07.468148    9387 sshutil.go:53] new ssh client: &{IP:localhost Port:51322 SSHKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/stopped-upgrade-042000/id_rsa Username:docker}
	I0327 11:16:07.548967    9387 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 11:16:07.554167    9387 api_server.go:52] waiting for apiserver process to appear ...
	I0327 11:16:07.554210    9387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 11:16:07.558097    9387 api_server.go:72] duration metric: took 110.372334ms to wait for apiserver process to appear ...
	I0327 11:16:07.558105    9387 api_server.go:88] waiting for apiserver healthz status ...
	I0327 11:16:07.558112    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
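
After starting the kubelet, the flow above first waits for a kube-apiserver process (`sudo pgrep -xnf kube-apiserver.*minikube.*`, api_server.go:52) and only then begins the healthz polling (api_server.go:88). A sketch of the process-wait phase; the 30s deadline and 500ms interval are assumptions:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls pgrep until the apiserver process appears or the
    // deadline passes. The pgrep arguments follow the log: -x exact match,
    // -n newest, -f match against the full command line.
    func waitForProcess(deadline time.Duration) error {
        end := time.Now().Add(deadline)
        for time.Now().Before(end) {
            if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %s", deadline)
    }

    func main() {
        if err := waitForProcess(30 * time.Second); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("apiserver process up; healthz polling proceeds as in the earlier sketch")
    }
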
	I0327 11:16:07.616938    9387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0327 11:16:07.617811    9387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 11:16:08.207272    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:12.560228    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:12.560271    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:13.208633    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:13.208944    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:16:13.242069    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:16:13.242192    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:16:13.269473    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:16:13.269552    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:16:13.282666    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:16:13.282740    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:16:13.294406    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:16:13.294480    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:16:13.305636    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:16:13.305703    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:16:13.316281    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:16:13.316346    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:16:13.326907    9149 logs.go:276] 0 containers: []
	W0327 11:16:13.326918    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:16:13.326986    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:16:13.337402    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:16:13.337417    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:16:13.337422    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:16:13.374109    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:16:13.374118    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:16:13.385822    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:16:13.385833    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:16:13.400980    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:16:13.400990    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:16:13.413003    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:16:13.413016    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:16:13.427984    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:16:13.427994    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:16:13.439188    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:16:13.439200    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:16:13.451834    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:16:13.451844    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:16:13.469100    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:16:13.469110    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:16:13.480555    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:16:13.480565    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:16:13.499045    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:16:13.499055    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:16:13.523546    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:16:13.523555    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:16:13.561500    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:16:13.561511    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:16:13.565694    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:16:13.565702    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:16:13.580437    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:16:13.580448    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:16:16.102350    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:17.560612    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:17.560657    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:21.104606    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:21.104779    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:16:21.119980    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:16:21.120052    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:16:21.131682    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:16:21.131745    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:16:21.142603    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:16:21.142674    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:16:21.160242    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:16:21.160332    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:16:21.172106    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:16:21.172198    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:16:21.184402    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:16:21.184489    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:16:21.196937    9149 logs.go:276] 0 containers: []
	W0327 11:16:21.196947    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:16:21.197030    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:16:21.208028    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:16:21.208047    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:16:21.208053    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:16:21.247463    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:16:21.247483    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:16:21.259881    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:16:21.259892    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:16:21.273020    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:16:21.273032    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:16:21.293953    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:16:21.293976    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:16:21.309882    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:16:21.309895    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:16:21.325127    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:16:21.325139    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:16:21.340644    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:16:21.340654    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:16:21.353230    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:16:21.353242    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:16:21.364983    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:16:21.364994    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:16:21.402718    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:16:21.402729    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:16:21.407511    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:16:21.407519    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:16:21.422301    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:16:21.422319    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:16:21.434349    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:16:21.434359    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:16:21.446254    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:16:21.446265    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:16:22.560990    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:22.561030    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:23.972298    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:27.561475    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:27.561505    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:28.974692    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:28.974809    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:16:28.988466    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:16:28.988528    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:16:28.999456    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:16:28.999519    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:16:29.010094    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:16:29.010155    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:16:29.020754    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:16:29.020816    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:16:29.031484    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:16:29.031546    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:16:29.042478    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:16:29.042541    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:16:29.053305    9149 logs.go:276] 0 containers: []
	W0327 11:16:29.053315    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:16:29.053365    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:16:29.064042    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:16:29.064058    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:16:29.064063    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:16:29.080343    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:16:29.080354    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:16:29.098389    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:16:29.098401    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:16:29.109973    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:16:29.109986    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:16:29.122683    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:16:29.122696    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:16:29.159264    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:16:29.159278    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:16:29.173736    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:16:29.173748    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:16:29.185041    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:16:29.185052    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:16:29.221415    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:16:29.221423    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:16:29.233923    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:16:29.233933    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:16:29.246358    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:16:29.246369    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:16:29.258887    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:16:29.258898    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:16:29.270681    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:16:29.270691    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:16:29.285119    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:16:29.285131    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:16:29.309059    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:16:29.309068    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:16:31.815363    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:32.562117    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:32.562161    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:36.817598    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:36.817812    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:16:36.835694    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:16:36.835781    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:16:36.848159    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:16:36.848230    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:16:36.859520    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:16:36.859591    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:16:36.869976    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:16:36.870041    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:16:36.880431    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:16:36.880500    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:16:36.891166    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:16:36.891231    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:16:36.901459    9149 logs.go:276] 0 containers: []
	W0327 11:16:36.901472    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:16:36.901528    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:16:36.914303    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:16:36.914320    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:16:36.914324    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:16:36.952996    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:16:36.953005    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:16:36.967124    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:16:36.967137    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:16:36.982101    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:16:36.982115    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:16:36.996652    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:16:36.996664    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:16:37.008154    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:16:37.008164    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:16:37.019627    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:16:37.019637    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:16:37.031210    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:16:37.031220    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:16:37.035616    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:16:37.035624    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:16:37.046721    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:16:37.046735    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:16:37.070618    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:16:37.070625    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:16:37.105183    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:16:37.105193    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:16:37.116687    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:16:37.116698    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:16:37.132141    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:16:37.132151    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:16:37.149327    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:16:37.149341    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:16:37.562941    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:37.562984    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0327 11:16:37.979486    9387 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0327 11:16:37.983896    9387 out.go:177] * Enabled addons: storage-provisioner
	I0327 11:16:37.994889    9387 addons.go:505] duration metric: took 30.547374833s for enable addons: enabled=[storage-provisioner]
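	
	Note: the default-storageclass failure a few lines above is the addon callback timing out on a StorageClass list against the unreachable apiserver. A hedged client-go sketch of that check, not the addon's actual code (the kubeconfig path mirrors the in-VM path used by the describe-nodes commands; a go.mod pulling in k8s.io/client-go is assumed):
	
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Path is illustrative; this report's commands read the in-VM
		// kubeconfig from /var/lib/minikube/kubeconfig.
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			fmt.Println("kubeconfig:", err)
			return
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			fmt.Println("client:", err)
			return
		}
		scs, err := clientset.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			// With 10.0.2.15:8443 unreachable this is the same
			// "dial tcp ... i/o timeout" the addon reported.
			fmt.Println("listing StorageClasses failed:", err)
			return
		}
		fmt.Println("StorageClasses:", len(scs.Items))
	}
	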
	I0327 11:16:39.664967    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:42.563946    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:42.563972    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:44.667464    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:44.667635    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:16:44.679003    9149 logs.go:276] 1 containers: [be48d14bfc2d]
	I0327 11:16:44.679074    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:16:44.689815    9149 logs.go:276] 1 containers: [1535ca6499ca]
	I0327 11:16:44.689876    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:16:44.701832    9149 logs.go:276] 4 containers: [82d8446541db 5c4e7cb35cd7 7aae4d9bf2e6 110c0cc41345]
	I0327 11:16:44.701898    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:16:44.712351    9149 logs.go:276] 1 containers: [cf4c511a035a]
	I0327 11:16:44.712415    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:16:44.722675    9149 logs.go:276] 1 containers: [88138884c699]
	I0327 11:16:44.722728    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:16:44.733453    9149 logs.go:276] 1 containers: [fcec42cf80b3]
	I0327 11:16:44.733517    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:16:44.743452    9149 logs.go:276] 0 containers: []
	W0327 11:16:44.743466    9149 logs.go:278] No container was found matching "kindnet"
	I0327 11:16:44.743544    9149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:16:44.754149    9149 logs.go:276] 1 containers: [a5f7f8972997]
	I0327 11:16:44.754167    9149 logs.go:123] Gathering logs for storage-provisioner [a5f7f8972997] ...
	I0327 11:16:44.754172    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f7f8972997"
	I0327 11:16:44.767294    9149 logs.go:123] Gathering logs for etcd [1535ca6499ca] ...
	I0327 11:16:44.767304    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1535ca6499ca"
	I0327 11:16:44.781435    9149 logs.go:123] Gathering logs for coredns [7aae4d9bf2e6] ...
	I0327 11:16:44.781446    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aae4d9bf2e6"
	I0327 11:16:44.793300    9149 logs.go:123] Gathering logs for container status ...
	I0327 11:16:44.793310    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:16:44.805162    9149 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:16:44.805174    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:16:44.842032    9149 logs.go:123] Gathering logs for kube-controller-manager [fcec42cf80b3] ...
	I0327 11:16:44.842042    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcec42cf80b3"
	I0327 11:16:44.859380    9149 logs.go:123] Gathering logs for coredns [82d8446541db] ...
	I0327 11:16:44.859392    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d8446541db"
	I0327 11:16:44.871556    9149 logs.go:123] Gathering logs for kubelet ...
	I0327 11:16:44.871566    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:16:44.908292    9149 logs.go:123] Gathering logs for dmesg ...
	I0327 11:16:44.908303    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:16:44.912550    9149 logs.go:123] Gathering logs for coredns [110c0cc41345] ...
	I0327 11:16:44.912556    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110c0cc41345"
	I0327 11:16:44.924362    9149 logs.go:123] Gathering logs for kube-scheduler [cf4c511a035a] ...
	I0327 11:16:44.924373    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4c511a035a"
	I0327 11:16:44.939301    9149 logs.go:123] Gathering logs for kube-proxy [88138884c699] ...
	I0327 11:16:44.939310    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88138884c699"
	I0327 11:16:44.951453    9149 logs.go:123] Gathering logs for Docker ...
	I0327 11:16:44.951463    9149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:16:44.975590    9149 logs.go:123] Gathering logs for kube-apiserver [be48d14bfc2d] ...
	I0327 11:16:44.975598    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be48d14bfc2d"
	I0327 11:16:44.992689    9149 logs.go:123] Gathering logs for coredns [5c4e7cb35cd7] ...
	I0327 11:16:44.992699    9149 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4e7cb35cd7"
	I0327 11:16:47.506366    9149 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:47.565176    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:47.565196    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:52.508643    9149 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:52.513407    9149 out.go:177] 
	W0327 11:16:52.516263    9149 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0327 11:16:52.516276    9149 out.go:239] * 
	W0327 11:16:52.517095    9149 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:16:52.526203    9149 out.go:177] 
	I0327 11:16:52.566496    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:52.566525    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:57.568390    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:57.568440    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:17:02.570636    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:17:02.570680    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
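	
	Note on the interleaved entries above: two minikube client processes (pids 9149 and 9387) are each polling the apiserver's /healthz endpoint at https://10.0.2.15:8443 until an overall deadline expires, which is why the same "context deadline exceeded" line repeats under both pids. A minimal Go sketch of that polling pattern, illustrative only and not minikube's actual api_server.go code, assuming a 5s per-request timeout and skipping TLS verification for brevity:
	
	package main
	
	import (
		"context"
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// waitForHealthy polls url until it returns 200 or the overall
	// deadline passes, mirroring the retry loop visible in the log.
	func waitForHealthy(url string, overall time.Duration) error {
		deadline := time.Now().Add(overall)
		client := &http.Client{
			Timeout: 5 * time.Second, // per-request timeout, as in the log's Client.Timeout errors
			Transport: &http.Transport{
				// The bootstrap apiserver serves a cluster CA cert; the real
				// client pins that CA. Skipping verification keeps the sketch short.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				ok := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if ok {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return context.DeadlineExceeded
	}
	
	func main() {
		if err := waitForHealthy("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
			fmt.Println("apiserver healthz never reported healthy:", err)
		}
	}
	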
	
	
	==> Docker <==
	-- Journal begins at Wed 2024-03-27 18:07:56 UTC, ends at Wed 2024-03-27 18:17:08 UTC. --
	Mar 27 18:16:54 running-upgrade-599000 dockerd[2914]: time="2024-03-27T18:16:54.253871002Z" level=warning msg="cleanup warnings time=\"2024-03-27T18:16:54Z\" level=info msg=\"starting signal loop\" namespace=moby pid=18011 runtime=io.containerd.runc.v2\n"
	Mar 27 18:16:54 running-upgrade-599000 dockerd[2914]: time="2024-03-27T18:16:54.293654764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 27 18:16:54 running-upgrade-599000 dockerd[2914]: time="2024-03-27T18:16:54.293685471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 27 18:16:54 running-upgrade-599000 dockerd[2914]: time="2024-03-27T18:16:54.293692846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 27 18:16:54 running-upgrade-599000 dockerd[2914]: time="2024-03-27T18:16:54.293864544Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/662a4d5d7ed57d72bced72496d9cb4e7fb6217010512c2decce2f7ed8685b9c0 pid=18031 runtime=io.containerd.runc.v2
	Mar 27 18:16:55 running-upgrade-599000 cri-dockerd[2758]: time="2024-03-27T18:16:55Z" level=error msg="ContainerStats resp: {0x40009a3cc0 linux}"
	Mar 27 18:16:55 running-upgrade-599000 cri-dockerd[2758]: time="2024-03-27T18:16:55Z" level=error msg="ContainerStats resp: {0x40009a3e80 linux}"
	Mar 27 18:16:55 running-upgrade-599000 cri-dockerd[2758]: time="2024-03-27T18:16:55Z" level=error msg="ContainerStats resp: {0x40006c0000 linux}"
	Mar 27 18:16:55 running-upgrade-599000 cri-dockerd[2758]: time="2024-03-27T18:16:55Z" level=error msg="ContainerStats resp: {0x40006c0b80 linux}"
	Mar 27 18:16:55 running-upgrade-599000 cri-dockerd[2758]: time="2024-03-27T18:16:55Z" level=error msg="ContainerStats resp: {0x40006c1080 linux}"
	Mar 27 18:16:55 running-upgrade-599000 cri-dockerd[2758]: time="2024-03-27T18:16:55Z" level=error msg="ContainerStats resp: {0x40006c17c0 linux}"
	Mar 27 18:16:55 running-upgrade-599000 cri-dockerd[2758]: time="2024-03-27T18:16:55Z" level=error msg="ContainerStats resp: {0x400098f540 linux}"
	Mar 27 18:16:57 running-upgrade-599000 cri-dockerd[2758]: time="2024-03-27T18:16:57Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 27 18:17:02 running-upgrade-599000 cri-dockerd[2758]: time="2024-03-27T18:17:02Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 27 18:17:05 running-upgrade-599000 cri-dockerd[2758]: time="2024-03-27T18:17:05Z" level=error msg="ContainerStats resp: {0x40008c2980 linux}"
	Mar 27 18:17:05 running-upgrade-599000 cri-dockerd[2758]: time="2024-03-27T18:17:05Z" level=error msg="ContainerStats resp: {0x40006c1640 linux}"
	Mar 27 18:17:06 running-upgrade-599000 cri-dockerd[2758]: time="2024-03-27T18:17:06Z" level=error msg="ContainerStats resp: {0x40008046c0 linux}"
	Mar 27 18:17:07 running-upgrade-599000 cri-dockerd[2758]: time="2024-03-27T18:17:07Z" level=error msg="ContainerStats resp: {0x4000805580 linux}"
	Mar 27 18:17:07 running-upgrade-599000 cri-dockerd[2758]: time="2024-03-27T18:17:07Z" level=error msg="ContainerStats resp: {0x40004e5a40 linux}"
	Mar 27 18:17:07 running-upgrade-599000 cri-dockerd[2758]: time="2024-03-27T18:17:07Z" level=error msg="ContainerStats resp: {0x4000804080 linux}"
	Mar 27 18:17:07 running-upgrade-599000 cri-dockerd[2758]: time="2024-03-27T18:17:07Z" level=error msg="ContainerStats resp: {0x4000804200 linux}"
	Mar 27 18:17:07 running-upgrade-599000 cri-dockerd[2758]: time="2024-03-27T18:17:07Z" level=error msg="ContainerStats resp: {0x40004e5240 linux}"
	Mar 27 18:17:07 running-upgrade-599000 cri-dockerd[2758]: time="2024-03-27T18:17:07Z" level=error msg="ContainerStats resp: {0x4000804d40 linux}"
	Mar 27 18:17:07 running-upgrade-599000 cri-dockerd[2758]: time="2024-03-27T18:17:07Z" level=error msg="ContainerStats resp: {0x40008058c0 linux}"
	Mar 27 18:17:07 running-upgrade-599000 cri-dockerd[2758]: time="2024-03-27T18:17:07Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	662a4d5d7ed57       edaa71f2aee88       14 seconds ago      Running             coredns                   2                   80a02575fcbb0
	9aa9264f8b414       edaa71f2aee88       14 seconds ago      Running             coredns                   2                   abdea1af7b499
	82d8446541db1       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   80a02575fcbb0
	5c4e7cb35cd7e       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   abdea1af7b499
	88138884c6999       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   be2780fc81f16
	a5f7f8972997f       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   b3dfc2597e910
	fcec42cf80b3a       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   4822d8455a3a4
	cf4c511a035a1       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   c53faa14de1ea
	be48d14bfc2d8       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   e70ecc57b0902
	1535ca6499ca4       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   f887328a92b60
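	
	The IDs in the table above are the same ones the log collector found earlier with docker ps -a --filter=name=k8s_<component> (cri-dockerd names containers k8s_<container>_<pod>_<namespace>_..., so the name filter selects per-component containers), after which each ID is tailed with docker logs --tail 400. A hedged Go sketch of that enumerate-and-tail loop; the command strings mirror the ssh_runner entries in this report, and error handling is deliberately minimal:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// containerIDs lists all container IDs whose names match the
	// cri-dockerd "k8s_<component>" naming convention.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}
	
	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			for _, id := range ids {
				// Tail the last 400 lines of each matching container.
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("==> %s [%s] <==\n%s\n", c, id, logs)
			}
		}
	}
	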
	
	
	==> coredns [5c4e7cb35cd7] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4885949387522276798.7849035329465132929. HINFO: read udp 10.244.0.2:40142->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4885949387522276798.7849035329465132929. HINFO: read udp 10.244.0.2:54966->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4885949387522276798.7849035329465132929. HINFO: read udp 10.244.0.2:43849->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4885949387522276798.7849035329465132929. HINFO: read udp 10.244.0.2:57213->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4885949387522276798.7849035329465132929. HINFO: read udp 10.244.0.2:40026->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4885949387522276798.7849035329465132929. HINFO: read udp 10.244.0.2:55483->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4885949387522276798.7849035329465132929. HINFO: read udp 10.244.0.2:59939->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4885949387522276798.7849035329465132929. HINFO: read udp 10.244.0.2:35853->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4885949387522276798.7849035329465132929. HINFO: read udp 10.244.0.2:51610->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4885949387522276798.7849035329465132929. HINFO: read udp 10.244.0.2:48956->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [662a4d5d7ed5] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8387728713207538747.1781508942588214255. HINFO: read udp 10.244.0.3:49572->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8387728713207538747.1781508942588214255. HINFO: read udp 10.244.0.3:35103->10.0.2.3:53: i/o timeout
	
	
	==> coredns [82d8446541db] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3205939301759890517.4302514250303106575. HINFO: read udp 10.244.0.3:40733->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3205939301759890517.4302514250303106575. HINFO: read udp 10.244.0.3:51362->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3205939301759890517.4302514250303106575. HINFO: read udp 10.244.0.3:47953->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3205939301759890517.4302514250303106575. HINFO: read udp 10.244.0.3:44357->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3205939301759890517.4302514250303106575. HINFO: read udp 10.244.0.3:53858->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3205939301759890517.4302514250303106575. HINFO: read udp 10.244.0.3:40695->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3205939301759890517.4302514250303106575. HINFO: read udp 10.244.0.3:49633->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3205939301759890517.4302514250303106575. HINFO: read udp 10.244.0.3:36126->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3205939301759890517.4302514250303106575. HINFO: read udp 10.244.0.3:53999->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3205939301759890517.4302514250303106575. HINFO: read udp 10.244.0.3:42864->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9aa9264f8b41] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8828626237143865383.4971473346897276462. HINFO: read udp 10.244.0.2:53142->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8828626237143865383.4971473346897276462. HINFO: read udp 10.244.0.2:57115->10.0.2.3:53: i/o timeout
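	
	All four coredns instances above fail the same way: their startup HINFO probes, forwarded upstream to 10.0.2.3:53 (the default DNS address of QEMU's user-mode network), never get a reply, which is consistent with the apiserver and networking timeouts throughout this report. A small standard-library probe that reproduces the symptom by sending one hand-built DNS query over UDP and waiting briefly for an answer (illustrative assumption: run from inside the guest):
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		conn, err := net.DialTimeout("udp", "10.0.2.3:53", 2*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		// Hand-built 17-byte query: 12-byte header (ID 0x1234, RD set,
		// QDCOUNT=1) followed by QNAME="." (0x00), QTYPE=NS (2), QCLASS=IN (1).
		query := []byte{0x12, 0x34, 0x01, 0x00, 0x00, 0x01, 0, 0, 0, 0, 0, 0,
			0x00, 0x00, 0x02, 0x00, 0x01}
		conn.SetDeadline(time.Now().Add(2 * time.Second))
		if _, err := conn.Write(query); err != nil {
			fmt.Println("write failed:", err)
			return
		}
		buf := make([]byte, 512)
		n, err := conn.Read(buf)
		if err != nil {
			// Matches the coredns "read udp ... i/o timeout" errors.
			fmt.Println("no reply:", err)
			return
		}
		fmt.Printf("got %d-byte reply; upstream DNS is reachable\n", n)
	}
	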
	
	
	==> describe nodes <==
	Name:               running-upgrade-599000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-599000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=475b39f6a1dc94a0c7060d2eec10d9b995edcd28
	                    minikube.k8s.io/name=running-upgrade-599000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_27T11_12_51_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 18:12:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-599000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 27 Mar 2024 18:17:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 27 Mar 2024 18:12:51 +0000   Wed, 27 Mar 2024 18:12:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 27 Mar 2024 18:12:51 +0000   Wed, 27 Mar 2024 18:12:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 27 Mar 2024 18:12:51 +0000   Wed, 27 Mar 2024 18:12:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 27 Mar 2024 18:12:51 +0000   Wed, 27 Mar 2024 18:12:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-599000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 abb0ab2b529042f29bf0468584789735
	  System UUID:                abb0ab2b529042f29bf0468584789735
	  Boot ID:                    f04dc299-6e04-4bc7-8c60-48f1caed765e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-c9hd8                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m2s
	  kube-system                 coredns-6d4b75cb6d-xft6j                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m2s
	  kube-system                 etcd-running-upgrade-599000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-599000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-599000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-proxy-fdz2q                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-running-upgrade-599000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m2s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m22s (x4 over 4m22s)  kubelet          Node running-upgrade-599000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x3 over 4m22s)  kubelet          Node running-upgrade-599000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x3 over 4m22s)  kubelet          Node running-upgrade-599000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-599000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-599000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-599000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-599000 status is now: NodeReady
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m3s                   node-controller  Node running-upgrade-599000 event: Registered Node running-upgrade-599000 in Controller
	
	
	==> dmesg <==
	[  +1.790125] systemd-fstab-generator[873]: Ignoring "noauto" for root device
	[  +0.085973] systemd-fstab-generator[884]: Ignoring "noauto" for root device
	[  +0.085789] systemd-fstab-generator[895]: Ignoring "noauto" for root device
	[  +1.137360] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.088921] systemd-fstab-generator[1046]: Ignoring "noauto" for root device
	[  +0.080784] systemd-fstab-generator[1057]: Ignoring "noauto" for root device
	[  +2.428760] systemd-fstab-generator[1284]: Ignoring "noauto" for root device
	[ +14.162624] systemd-fstab-generator[1951]: Ignoring "noauto" for root device
	[  +2.866533] systemd-fstab-generator[2230]: Ignoring "noauto" for root device
	[  +0.145414] systemd-fstab-generator[2263]: Ignoring "noauto" for root device
	[  +0.088780] systemd-fstab-generator[2274]: Ignoring "noauto" for root device
	[  +0.095950] systemd-fstab-generator[2287]: Ignoring "noauto" for root device
	[  +2.277536] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.145634] systemd-fstab-generator[2715]: Ignoring "noauto" for root device
	[  +0.088081] systemd-fstab-generator[2726]: Ignoring "noauto" for root device
	[  +0.081725] systemd-fstab-generator[2737]: Ignoring "noauto" for root device
	[  +0.093124] systemd-fstab-generator[2751]: Ignoring "noauto" for root device
	[  +2.197041] systemd-fstab-generator[2901]: Ignoring "noauto" for root device
	[  +5.389903] systemd-fstab-generator[3304]: Ignoring "noauto" for root device
	[  +1.312775] systemd-fstab-generator[3478]: Ignoring "noauto" for root device
	[ +19.028764] kauditd_printk_skb: 68 callbacks suppressed
	[Mar27 18:12] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.355926] systemd-fstab-generator[11280]: Ignoring "noauto" for root device
	[  +5.631682] systemd-fstab-generator[11883]: Ignoring "noauto" for root device
	[  +0.468632] systemd-fstab-generator[12019]: Ignoring "noauto" for root device
	
	
	==> etcd [1535ca6499ca] <==
	{"level":"info","ts":"2024-03-27T18:12:46.792Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-27T18:12:46.792Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-27T18:12:46.793Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-03-27T18:12:46.793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-03-27T18:12:46.793Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-03-27T18:12:46.793Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-27T18:12:46.793Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-27T18:12:47.787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-27T18:12:47.787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-27T18:12:47.787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-03-27T18:12:47.787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-03-27T18:12:47.787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-27T18:12:47.787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-03-27T18:12:47.787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-27T18:12:47.787Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T18:12:47.788Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T18:12:47.788Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T18:12:47.788Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T18:12:47.788Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-599000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-27T18:12:47.789Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-27T18:12:47.789Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-27T18:12:47.790Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-27T18:12:47.791Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-03-27T18:12:47.791Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-27T18:12:47.791Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:17:08 up 9 min,  0 users,  load average: 0.31, 0.34, 0.18
	Linux running-upgrade-599000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [be48d14bfc2d] <==
	I0327 18:12:49.021099       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0327 18:12:49.041924       1 cache.go:39] Caches are synced for autoregister controller
	I0327 18:12:49.042330       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0327 18:12:49.042354       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0327 18:12:49.042491       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0327 18:12:49.042779       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0327 18:12:49.042828       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0327 18:12:49.776231       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0327 18:12:49.950022       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0327 18:12:49.954546       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0327 18:12:49.954689       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0327 18:12:50.106777       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0327 18:12:50.116581       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0327 18:12:50.219288       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0327 18:12:50.221230       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0327 18:12:50.221609       1 controller.go:611] quota admission added evaluator for: endpoints
	I0327 18:12:50.223095       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0327 18:12:51.089794       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0327 18:12:51.487637       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0327 18:12:51.490938       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0327 18:12:51.497096       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0327 18:12:51.545249       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0327 18:13:05.847621       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0327 18:13:05.896883       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0327 18:13:06.344617       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [fcec42cf80b3] <==
	I0327 18:13:05.145664       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0327 18:13:05.145673       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0327 18:13:05.145700       1 shared_informer.go:262] Caches are synced for HPA
	I0327 18:13:05.146779       1 shared_informer.go:262] Caches are synced for deployment
	I0327 18:13:05.146848       1 shared_informer.go:262] Caches are synced for PV protection
	I0327 18:13:05.151742       1 shared_informer.go:262] Caches are synced for namespace
	I0327 18:13:05.195412       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0327 18:13:05.196607       1 shared_informer.go:262] Caches are synced for endpoint
	I0327 18:13:05.212643       1 shared_informer.go:262] Caches are synced for expand
	I0327 18:13:05.227404       1 shared_informer.go:262] Caches are synced for persistent volume
	I0327 18:13:05.245794       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0327 18:13:05.245796       1 shared_informer.go:262] Caches are synced for ephemeral
	I0327 18:13:05.246885       1 shared_informer.go:262] Caches are synced for attach detach
	I0327 18:13:05.255254       1 shared_informer.go:262] Caches are synced for resource quota
	I0327 18:13:05.295926       1 shared_informer.go:262] Caches are synced for stateful set
	I0327 18:13:05.295944       1 shared_informer.go:262] Caches are synced for PVC protection
	I0327 18:13:05.347285       1 shared_informer.go:262] Caches are synced for resource quota
	I0327 18:13:05.391418       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0327 18:13:05.760151       1 shared_informer.go:262] Caches are synced for garbage collector
	I0327 18:13:05.845850       1 shared_informer.go:262] Caches are synced for garbage collector
	I0327 18:13:05.845900       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0327 18:13:05.850758       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-fdz2q"
	I0327 18:13:05.897899       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0327 18:13:06.148497       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-xft6j"
	I0327 18:13:06.153465       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-c9hd8"
	
	
	==> kube-proxy [88138884c699] <==
	I0327 18:13:06.333645       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0327 18:13:06.333756       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0327 18:13:06.333773       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0327 18:13:06.342809       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0327 18:13:06.342819       1 server_others.go:206] "Using iptables Proxier"
	I0327 18:13:06.342828       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0327 18:13:06.342971       1 server.go:661] "Version info" version="v1.24.1"
	I0327 18:13:06.342978       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0327 18:13:06.343266       1 config.go:317] "Starting service config controller"
	I0327 18:13:06.343277       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0327 18:13:06.343322       1 config.go:226] "Starting endpoint slice config controller"
	I0327 18:13:06.343327       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0327 18:13:06.343582       1 config.go:444] "Starting node config controller"
	I0327 18:13:06.343604       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0327 18:13:06.444623       1 shared_informer.go:262] Caches are synced for node config
	I0327 18:13:06.444670       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0327 18:13:06.444686       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [cf4c511a035a] <==
	W0327 18:12:49.012436       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0327 18:12:49.012451       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0327 18:12:49.012511       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0327 18:12:49.012534       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0327 18:12:49.012560       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0327 18:12:49.012566       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0327 18:12:49.012599       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0327 18:12:49.012606       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0327 18:12:49.012667       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0327 18:12:49.012674       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0327 18:12:49.012709       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0327 18:12:49.012721       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0327 18:12:49.012741       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0327 18:12:49.012747       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0327 18:12:49.013039       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0327 18:12:49.013049       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0327 18:12:49.855880       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0327 18:12:49.856269       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0327 18:12:49.856425       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0327 18:12:49.856454       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0327 18:12:49.928416       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0327 18:12:49.928519       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0327 18:12:49.973451       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0327 18:12:49.973613       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0327 18:12:52.910991       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-03-27 18:07:56 UTC, ends at Wed 2024-03-27 18:17:09 UTC. --
	Mar 27 18:12:52 running-upgrade-599000 kubelet[11889]: I0327 18:12:52.946809   11889 reconciler.go:157] "Reconciler: start to sync state"
	Mar 27 18:12:53 running-upgrade-599000 kubelet[11889]: E0327 18:12:53.131789   11889 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-599000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-599000"
	Mar 27 18:12:53 running-upgrade-599000 kubelet[11889]: E0327 18:12:53.331106   11889 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-599000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-599000"
	Mar 27 18:12:53 running-upgrade-599000 kubelet[11889]: E0327 18:12:53.526847   11889 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-599000\" already exists" pod="kube-system/etcd-running-upgrade-599000"
	Mar 27 18:13:05 running-upgrade-599000 kubelet[11889]: I0327 18:13:05.116315   11889 topology_manager.go:200] "Topology Admit Handler"
	Mar 27 18:13:05 running-upgrade-599000 kubelet[11889]: I0327 18:13:05.167541   11889 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 27 18:13:05 running-upgrade-599000 kubelet[11889]: I0327 18:13:05.168119   11889 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 27 18:13:05 running-upgrade-599000 kubelet[11889]: I0327 18:13:05.170283   11889 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9a042942-0523-4709-bdb3-f8b3cd1465ac-tmp\") pod \"storage-provisioner\" (UID: \"9a042942-0523-4709-bdb3-f8b3cd1465ac\") " pod="kube-system/storage-provisioner"
	Mar 27 18:13:05 running-upgrade-599000 kubelet[11889]: I0327 18:13:05.170358   11889 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5nxt\" (UniqueName: \"kubernetes.io/projected/9a042942-0523-4709-bdb3-f8b3cd1465ac-kube-api-access-s5nxt\") pod \"storage-provisioner\" (UID: \"9a042942-0523-4709-bdb3-f8b3cd1465ac\") " pod="kube-system/storage-provisioner"
	Mar 27 18:13:05 running-upgrade-599000 kubelet[11889]: E0327 18:13:05.274777   11889 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Mar 27 18:13:05 running-upgrade-599000 kubelet[11889]: E0327 18:13:05.274797   11889 projected.go:192] Error preparing data for projected volume kube-api-access-s5nxt for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Mar 27 18:13:05 running-upgrade-599000 kubelet[11889]: E0327 18:13:05.274849   11889 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/9a042942-0523-4709-bdb3-f8b3cd1465ac-kube-api-access-s5nxt podName:9a042942-0523-4709-bdb3-f8b3cd1465ac nodeName:}" failed. No retries permitted until 2024-03-27 18:13:05.774837002 +0000 UTC m=+14.296095741 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s5nxt" (UniqueName: "kubernetes.io/projected/9a042942-0523-4709-bdb3-f8b3cd1465ac-kube-api-access-s5nxt") pod "storage-provisioner" (UID: "9a042942-0523-4709-bdb3-f8b3cd1465ac") : configmap "kube-root-ca.crt" not found
	Mar 27 18:13:05 running-upgrade-599000 kubelet[11889]: I0327 18:13:05.853508   11889 topology_manager.go:200] "Topology Admit Handler"
	Mar 27 18:13:05 running-upgrade-599000 kubelet[11889]: I0327 18:13:05.977035   11889 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgnf6\" (UniqueName: \"kubernetes.io/projected/c5cebc6e-6c32-4985-a9b1-d370e3948d56-kube-api-access-bgnf6\") pod \"kube-proxy-fdz2q\" (UID: \"c5cebc6e-6c32-4985-a9b1-d370e3948d56\") " pod="kube-system/kube-proxy-fdz2q"
	Mar 27 18:13:05 running-upgrade-599000 kubelet[11889]: I0327 18:13:05.977076   11889 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c5cebc6e-6c32-4985-a9b1-d370e3948d56-kube-proxy\") pod \"kube-proxy-fdz2q\" (UID: \"c5cebc6e-6c32-4985-a9b1-d370e3948d56\") " pod="kube-system/kube-proxy-fdz2q"
	Mar 27 18:13:05 running-upgrade-599000 kubelet[11889]: I0327 18:13:05.977089   11889 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5cebc6e-6c32-4985-a9b1-d370e3948d56-xtables-lock\") pod \"kube-proxy-fdz2q\" (UID: \"c5cebc6e-6c32-4985-a9b1-d370e3948d56\") " pod="kube-system/kube-proxy-fdz2q"
	Mar 27 18:13:05 running-upgrade-599000 kubelet[11889]: I0327 18:13:05.977100   11889 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5cebc6e-6c32-4985-a9b1-d370e3948d56-lib-modules\") pod \"kube-proxy-fdz2q\" (UID: \"c5cebc6e-6c32-4985-a9b1-d370e3948d56\") " pod="kube-system/kube-proxy-fdz2q"
	Mar 27 18:13:06 running-upgrade-599000 kubelet[11889]: I0327 18:13:06.152432   11889 topology_manager.go:200] "Topology Admit Handler"
	Mar 27 18:13:06 running-upgrade-599000 kubelet[11889]: I0327 18:13:06.159848   11889 topology_manager.go:200] "Topology Admit Handler"
	Mar 27 18:13:06 running-upgrade-599000 kubelet[11889]: I0327 18:13:06.179382   11889 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1677e020-99e6-4781-9c47-b5912b878c6f-config-volume\") pod \"coredns-6d4b75cb6d-xft6j\" (UID: \"1677e020-99e6-4781-9c47-b5912b878c6f\") " pod="kube-system/coredns-6d4b75cb6d-xft6j"
	Mar 27 18:13:06 running-upgrade-599000 kubelet[11889]: I0327 18:13:06.179564   11889 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxqft\" (UniqueName: \"kubernetes.io/projected/1677e020-99e6-4781-9c47-b5912b878c6f-kube-api-access-sxqft\") pod \"coredns-6d4b75cb6d-xft6j\" (UID: \"1677e020-99e6-4781-9c47-b5912b878c6f\") " pod="kube-system/coredns-6d4b75cb6d-xft6j"
	Mar 27 18:13:06 running-upgrade-599000 kubelet[11889]: I0327 18:13:06.179744   11889 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrbtd\" (UniqueName: \"kubernetes.io/projected/4f9fee54-e580-4a1d-8daa-517c0a9dc299-kube-api-access-lrbtd\") pod \"coredns-6d4b75cb6d-c9hd8\" (UID: \"4f9fee54-e580-4a1d-8daa-517c0a9dc299\") " pod="kube-system/coredns-6d4b75cb6d-c9hd8"
	Mar 27 18:13:06 running-upgrade-599000 kubelet[11889]: I0327 18:13:06.179760   11889 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f9fee54-e580-4a1d-8daa-517c0a9dc299-config-volume\") pod \"coredns-6d4b75cb6d-c9hd8\" (UID: \"4f9fee54-e580-4a1d-8daa-517c0a9dc299\") " pod="kube-system/coredns-6d4b75cb6d-c9hd8"
	Mar 27 18:16:54 running-upgrade-599000 kubelet[11889]: I0327 18:16:54.916207   11889 scope.go:110] "RemoveContainer" containerID="110c0cc413458424860f3c362b2118b1d91e4715399b076b39eb733b2be83112"
	Mar 27 18:16:54 running-upgrade-599000 kubelet[11889]: I0327 18:16:54.932155   11889 scope.go:110] "RemoveContainer" containerID="7aae4d9bf2e6674e4b4a5f409f40a88ce745e4cd533a9f6bc69b1d79ef52bb5c"
	
	
	==> storage-provisioner [a5f7f8972997] <==
	I0327 18:13:06.248914       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0327 18:13:06.258230       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0327 18:13:06.258290       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0327 18:13:06.261756       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0327 18:13:06.261807       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-599000_b714b536-df49-4b2d-b1c1-9b1b7220b265!
	I0327 18:13:06.262116       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8a7c9208-4b82-43e7-85d0-4fa49b594db2", APIVersion:"v1", ResourceVersion:"363", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-599000_b714b536-df49-4b2d-b1c1-9b1b7220b265 became leader
	I0327 18:13:06.362687       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-599000_b714b536-df49-4b2d-b1c1-9b1b7220b265!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-599000 -n running-upgrade-599000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-599000 -n running-upgrade-599000: exit status 2 (15.644688084s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-599000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-599000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-599000
--- FAIL: TestRunningBinaryUpgrade (620.21s)
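
The control-plane logs above look healthy through 18:13 (scheduler caches synced, CoreDNS replicas created, storage-provisioner acquired its lease), yet the status probe afterwards reports the apiserver Stopped. As a minimal triage sketch for a live reproduction, the commands below use only the binary and profile name that appear in this log; that the guest is still reachable over SSH at that point is an assumption:

    # inspect the apiserver container's state inside the guest (docker runtime, per the log)
    out/minikube-darwin-arm64 -p running-upgrade-599000 ssh -- sudo docker ps -a --filter name=kube-apiserver
    # capture the full log bundle for a bug report
    out/minikube-darwin-arm64 -p running-upgrade-599000 logs --file=logs.txt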

TestKubernetesUpgrade (18.13s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-600000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-600000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.817524042s)

-- stdout --
	* [kubernetes-upgrade-600000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-600000" primary control-plane node in "kubernetes-upgrade-600000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-600000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:10:04.874140    9262 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:10:04.874254    9262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:10:04.874257    9262 out.go:304] Setting ErrFile to fd 2...
	I0327 11:10:04.874260    9262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:10:04.874381    9262 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:10:04.875482    9262 out.go:298] Setting JSON to false
	I0327 11:10:04.892054    9262 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5977,"bootTime":1711557027,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:10:04.892120    9262 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:10:04.898402    9262 out.go:177] * [kubernetes-upgrade-600000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:10:04.906443    9262 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:10:04.906533    9262 notify.go:220] Checking for updates...
	I0327 11:10:04.911402    9262 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:10:04.915374    9262 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:10:04.918424    9262 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:10:04.922439    9262 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:10:04.925430    9262 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:10:04.928719    9262 config.go:182] Loaded profile config "multinode-906000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:10:04.928789    9262 config.go:182] Loaded profile config "running-upgrade-599000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 11:10:04.928846    9262 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:10:04.931379    9262 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 11:10:04.938382    9262 start.go:297] selected driver: qemu2
	I0327 11:10:04.938388    9262 start.go:901] validating driver "qemu2" against <nil>
	I0327 11:10:04.938395    9262 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:10:04.940707    9262 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 11:10:04.944410    9262 out.go:177] * Automatically selected the socket_vmnet network
	I0327 11:10:04.947422    9262 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 11:10:04.947457    9262 cni.go:84] Creating CNI manager for ""
	I0327 11:10:04.947464    9262 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0327 11:10:04.947503    9262 start.go:340] cluster config:
	{Name:kubernetes-upgrade-600000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:10:04.952318    9262 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:10:04.960255    9262 out.go:177] * Starting "kubernetes-upgrade-600000" primary control-plane node in "kubernetes-upgrade-600000" cluster
	I0327 11:10:04.964386    9262 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 11:10:04.964424    9262 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0327 11:10:04.964434    9262 cache.go:56] Caching tarball of preloaded images
	I0327 11:10:04.964536    9262 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:10:04.964544    9262 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0327 11:10:04.964601    9262 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/kubernetes-upgrade-600000/config.json ...
	I0327 11:10:04.964613    9262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/kubernetes-upgrade-600000/config.json: {Name:mk2226259f99704e9226c5b966174f27e079ffd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:10:04.964826    9262 start.go:360] acquireMachinesLock for kubernetes-upgrade-600000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:10:04.964857    9262 start.go:364] duration metric: took 25.417µs to acquireMachinesLock for "kubernetes-upgrade-600000"
	I0327 11:10:04.964871    9262 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:10:04.964904    9262 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:10:04.969179    9262 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 11:10:04.993821    9262 start.go:159] libmachine.API.Create for "kubernetes-upgrade-600000" (driver="qemu2")
	I0327 11:10:04.993844    9262 client.go:168] LocalClient.Create starting
	I0327 11:10:04.993915    9262 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:10:04.993945    9262 main.go:141] libmachine: Decoding PEM data...
	I0327 11:10:04.993959    9262 main.go:141] libmachine: Parsing certificate...
	I0327 11:10:04.994003    9262 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:10:04.994029    9262 main.go:141] libmachine: Decoding PEM data...
	I0327 11:10:04.994035    9262 main.go:141] libmachine: Parsing certificate...
	I0327 11:10:04.994404    9262 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:10:05.167582    9262 main.go:141] libmachine: Creating SSH key...
	I0327 11:10:05.263890    9262 main.go:141] libmachine: Creating Disk image...
	I0327 11:10:05.263900    9262 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:10:05.264079    9262 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/disk.qcow2
	I0327 11:10:05.278332    9262 main.go:141] libmachine: STDOUT: 
	I0327 11:10:05.278359    9262 main.go:141] libmachine: STDERR: 
	I0327 11:10:05.278418    9262 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/disk.qcow2 +20000M
	I0327 11:10:05.289862    9262 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:10:05.289883    9262 main.go:141] libmachine: STDERR: 
	I0327 11:10:05.289910    9262 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/disk.qcow2
	I0327 11:10:05.289916    9262 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:10:05.289947    9262 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:79:00:62:26:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/disk.qcow2
	I0327 11:10:05.291862    9262 main.go:141] libmachine: STDOUT: 
	I0327 11:10:05.291880    9262 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:10:05.291902    9262 client.go:171] duration metric: took 298.0535ms to LocalClient.Create
	I0327 11:10:07.293571    9262 start.go:128] duration metric: took 2.328671875s to createHost
	I0327 11:10:07.293630    9262 start.go:83] releasing machines lock for "kubernetes-upgrade-600000", held for 2.328782125s
	W0327 11:10:07.293667    9262 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:10:07.302454    9262 out.go:177] * Deleting "kubernetes-upgrade-600000" in qemu2 ...
	W0327 11:10:07.321776    9262 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:10:07.321782    9262 start.go:728] Will try again in 5 seconds ...
	I0327 11:10:12.323901    9262 start.go:360] acquireMachinesLock for kubernetes-upgrade-600000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:10:12.324365    9262 start.go:364] duration metric: took 368.458µs to acquireMachinesLock for "kubernetes-upgrade-600000"
	I0327 11:10:12.324535    9262 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:10:12.324784    9262 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:10:12.334432    9262 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 11:10:12.378524    9262 start.go:159] libmachine.API.Create for "kubernetes-upgrade-600000" (driver="qemu2")
	I0327 11:10:12.378580    9262 client.go:168] LocalClient.Create starting
	I0327 11:10:12.378705    9262 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:10:12.378781    9262 main.go:141] libmachine: Decoding PEM data...
	I0327 11:10:12.378797    9262 main.go:141] libmachine: Parsing certificate...
	I0327 11:10:12.378876    9262 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:10:12.378924    9262 main.go:141] libmachine: Decoding PEM data...
	I0327 11:10:12.378938    9262 main.go:141] libmachine: Parsing certificate...
	I0327 11:10:12.379450    9262 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:10:12.546045    9262 main.go:141] libmachine: Creating SSH key...
	I0327 11:10:12.587818    9262 main.go:141] libmachine: Creating Disk image...
	I0327 11:10:12.587830    9262 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:10:12.588003    9262 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/disk.qcow2
	I0327 11:10:12.600201    9262 main.go:141] libmachine: STDOUT: 
	I0327 11:10:12.600221    9262 main.go:141] libmachine: STDERR: 
	I0327 11:10:12.600269    9262 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/disk.qcow2 +20000M
	I0327 11:10:12.610913    9262 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:10:12.610933    9262 main.go:141] libmachine: STDERR: 
	I0327 11:10:12.610945    9262 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/disk.qcow2
	I0327 11:10:12.610949    9262 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:10:12.610993    9262 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:1a:8a:db:da:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/disk.qcow2
	I0327 11:10:12.612772    9262 main.go:141] libmachine: STDOUT: 
	I0327 11:10:12.612788    9262 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:10:12.612799    9262 client.go:171] duration metric: took 234.217ms to LocalClient.Create
	I0327 11:10:14.615007    9262 start.go:128] duration metric: took 2.290199416s to createHost
	I0327 11:10:14.615093    9262 start.go:83] releasing machines lock for "kubernetes-upgrade-600000", held for 2.290694667s
	W0327 11:10:14.615478    9262 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-600000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-600000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:10:14.625157    9262 out.go:177] 
	W0327 11:10:14.631317    9262 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:10:14.631376    9262 out.go:239] * 
	* 
	W0327 11:10:14.633856    9262 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:10:14.643104    9262 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-600000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
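
Both create attempts fail at the same step: socket_vmnet_client cannot reach the "/var/run/socket_vmnet" unix socket, so QEMU is never handed a network file descriptor and provisioning aborts (the restart attempts below fail identically). A quick host-side triage sketch, using only the paths printed in the log; that the daemon is supervised by launchd on this host is an assumption:

    # is the socket_vmnet daemon running, and does its unix socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet
    # if it is installed as a launchd service (assumed setup), look for it:
    sudo launchctl list | grep socket_vmnet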
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-600000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-600000: (2.848765416s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-600000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-600000 status --format={{.Host}}: exit status 7 (60.147583ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-600000 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-600000 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.198368709s)

-- stdout --
	* [kubernetes-upgrade-600000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-600000" primary control-plane node in "kubernetes-upgrade-600000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-600000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-600000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:10:17.602880    9304 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:10:17.603007    9304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:10:17.603010    9304 out.go:304] Setting ErrFile to fd 2...
	I0327 11:10:17.603012    9304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:10:17.603132    9304 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:10:17.604117    9304 out.go:298] Setting JSON to false
	I0327 11:10:17.621376    9304 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5990,"bootTime":1711557027,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:10:17.621445    9304 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:10:17.625281    9304 out.go:177] * [kubernetes-upgrade-600000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:10:17.633125    9304 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:10:17.637098    9304 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:10:17.633212    9304 notify.go:220] Checking for updates...
	I0327 11:10:17.645243    9304 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:10:17.652092    9304 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:10:17.655069    9304 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:10:17.658110    9304 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:10:17.662242    9304 config.go:182] Loaded profile config "kubernetes-upgrade-600000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0327 11:10:17.662488    9304 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:10:17.667037    9304 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 11:10:17.674001    9304 start.go:297] selected driver: qemu2
	I0327 11:10:17.674006    9304 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:10:17.674053    9304 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:10:17.676528    9304 cni.go:84] Creating CNI manager for ""
	I0327 11:10:17.676547    9304 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:10:17.676570    9304 start.go:340] cluster config:
	{Name:kubernetes-upgrade-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:kubernetes-upgrade-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:10:17.680850    9304 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:10:17.687100    9304 out.go:177] * Starting "kubernetes-upgrade-600000" primary control-plane node in "kubernetes-upgrade-600000" cluster
	I0327 11:10:17.691065    9304 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 11:10:17.691079    9304 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0327 11:10:17.691086    9304 cache.go:56] Caching tarball of preloaded images
	I0327 11:10:17.691140    9304 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:10:17.691145    9304 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on docker
	I0327 11:10:17.691189    9304 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/kubernetes-upgrade-600000/config.json ...
	I0327 11:10:17.691643    9304 start.go:360] acquireMachinesLock for kubernetes-upgrade-600000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:10:17.691666    9304 start.go:364] duration metric: took 17.458µs to acquireMachinesLock for "kubernetes-upgrade-600000"
	I0327 11:10:17.691675    9304 start.go:96] Skipping create...Using existing machine configuration
	I0327 11:10:17.691680    9304 fix.go:54] fixHost starting: 
	I0327 11:10:17.691789    9304 fix.go:112] recreateIfNeeded on kubernetes-upgrade-600000: state=Stopped err=<nil>
	W0327 11:10:17.691798    9304 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 11:10:17.700042    9304 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-600000" ...
	I0327 11:10:17.704154    9304 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:1a:8a:db:da:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/disk.qcow2
	I0327 11:10:17.705956    9304 main.go:141] libmachine: STDOUT: 
	I0327 11:10:17.705976    9304 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:10:17.706002    9304 fix.go:56] duration metric: took 14.322083ms for fixHost
	I0327 11:10:17.706007    9304 start.go:83] releasing machines lock for "kubernetes-upgrade-600000", held for 14.337583ms
	W0327 11:10:17.706015    9304 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:10:17.706044    9304 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:10:17.706049    9304 start.go:728] Will try again in 5 seconds ...
	I0327 11:10:22.708209    9304 start.go:360] acquireMachinesLock for kubernetes-upgrade-600000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:10:22.708667    9304 start.go:364] duration metric: took 354µs to acquireMachinesLock for "kubernetes-upgrade-600000"
	I0327 11:10:22.708735    9304 start.go:96] Skipping create...Using existing machine configuration
	I0327 11:10:22.708752    9304 fix.go:54] fixHost starting: 
	I0327 11:10:22.709517    9304 fix.go:112] recreateIfNeeded on kubernetes-upgrade-600000: state=Stopped err=<nil>
	W0327 11:10:22.709545    9304 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 11:10:22.719962    9304 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-600000" ...
	I0327 11:10:22.724274    9304 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:1a:8a:db:da:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubernetes-upgrade-600000/disk.qcow2
	I0327 11:10:22.734569    9304 main.go:141] libmachine: STDOUT: 
	I0327 11:10:22.734645    9304 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:10:22.734739    9304 fix.go:56] duration metric: took 25.989166ms for fixHost
	I0327 11:10:22.734765    9304 start.go:83] releasing machines lock for "kubernetes-upgrade-600000", held for 26.075875ms
	W0327 11:10:22.734968    9304 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-600000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-600000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:10:22.741992    9304 out.go:177] 
	W0327 11:10:22.745107    9304 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:10:22.745132    9304 out.go:239] * 
	* 
	W0327 11:10:22.747477    9304 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:10:22.757021    9304 out.go:177] 

** /stderr **
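
The repeated "Connection refused" errors against /var/run/socket_vmnet in the stderr above mean the socket_vmnet daemon was not listening when the qemu2 driver tried to launch the VM. As a minimal sketch (socket path taken from the log, not from minikube's source; dialing may require root, since socket_vmnet normally runs as root), the daemon's availability can be probed the same way the driver's connection effectively is:

	// probe_socket_vmnet.go - dial the unix socket the qemu2 driver depends on.
	// Sketch only: the path is assumed from the log above.
	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// On this agent this would print a "connection refused" error,
			// matching the driver failure recorded above.
			fmt.Fprintln(os.Stderr, "socket_vmnet not reachable:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}
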
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-600000 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-600000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-600000 version --output=json: exit status 1 (61.291416ms)

** stderr ** 
	error: context "kubernetes-upgrade-600000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
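
The missing context is expected here: minikube only writes a kubeconfig entry after a successful start, and the start above exited with status 80 before provisioning finished. A quick confirmation, sketched below (profile name taken from this run; "kubectl config get-contexts <name>" exits non-zero when the named context does not exist):

	// check_context.go - ask kubectl whether the failed profile's context exists.
	// Sketch only; assumes kubectl is on PATH and KUBECONFIG points at the file above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("kubectl", "config", "get-contexts", "kubernetes-upgrade-600000")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("context not found, as expected after the failed start:", err)
		}
	}
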
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-03-27 11:10:22.834089 -0700 PDT m=+1138.643658043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-600000 -n kubernetes-upgrade-600000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-600000 -n kubernetes-upgrade-600000: exit status 7 (34.813791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-600000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-600000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-600000
--- FAIL: TestKubernetesUpgrade (18.13s)
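
For readers tracing the control flow: the run above attempts fixHost once, logs "StartHost failed, but will try again", sleeps five seconds (start.go:728), retries once, and only then exits with GUEST_PROVISION / status 80. A condensed sketch of that shape (function and variable names here are illustrative, not minikube's actual implementation):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithOneRetry mirrors the single delayed re-attempt visible in the
	// log: fail, wait 5s ("Will try again in 5 seconds"), retry, then give up.
	func startWithOneRetry(startHost func() error) error {
		if err := startHost(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second)
			return startHost()
		}
		return nil
	}

	func main() {
		err := startWithOneRetry(func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		})
		fmt.Println("final error:", err) // surfaces as GUEST_PROVISION / exit status 80 in the real run
	}
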

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.2s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.0-beta.0 on darwin (arm64)
- MINIKUBE_LOCATION=18517
- KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2380711806/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.20s)
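
DRV_UNSUPPORTED_OS (exit status 56) is the only possible outcome on this host: hyperkit is an Intel-only macOS hypervisor, so on darwin/arm64 these upgrade subtests cannot pass. A guard of the following shape would skip them on Apple Silicon agents (sketch only, not the harness's existing code; the package name is assumed):

	package integration

	import (
		"runtime"
		"testing"
	)

	// requireHyperkitHost skips hyperkit-driver tests on hosts where the
	// driver cannot run, matching the DRV_UNSUPPORTED_OS exit seen above.
	func requireHyperkitHost(t *testing.T) {
		t.Helper()
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			t.Skipf("hyperkit driver is not supported on %s/%s", runtime.GOOS, runtime.GOARCH)
		}
	}
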

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.29s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.0-beta.0 on darwin (arm64)
- MINIKUBE_LOCATION=18517
- KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current318446929/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.29s)

TestStoppedBinaryUpgrade/Upgrade (580.28s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2559600960 start -p stopped-upgrade-042000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2559600960 start -p stopped-upgrade-042000 --memory=2200 --vm-driver=qemu2 : (45.757245875s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2559600960 -p stopped-upgrade-042000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2559600960 -p stopped-upgrade-042000 stop: (12.115785417s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-042000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-042000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.340072875s)

-- stdout --
	* [stopped-upgrade-042000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-042000" primary control-plane node in "stopped-upgrade-042000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-042000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0327 11:11:26.312392    9387 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:11:26.312559    9387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:11:26.312563    9387 out.go:304] Setting ErrFile to fd 2...
	I0327 11:11:26.312566    9387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:11:26.312709    9387 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:11:26.313834    9387 out.go:298] Setting JSON to false
	I0327 11:11:26.333114    9387 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6059,"bootTime":1711557027,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:11:26.333176    9387 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:11:26.335604    9387 out.go:177] * [stopped-upgrade-042000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:11:26.343793    9387 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:11:26.343830    9387 notify.go:220] Checking for updates...
	I0327 11:11:26.347733    9387 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:11:26.351728    9387 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:11:26.355533    9387 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:11:26.358764    9387 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:11:26.361845    9387 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:11:26.365181    9387 config.go:182] Loaded profile config "stopped-upgrade-042000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 11:11:26.368687    9387 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0327 11:11:26.371769    9387 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:11:26.375745    9387 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 11:11:26.382731    9387 start.go:297] selected driver: qemu2
	I0327 11:11:26.382737    9387 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-042000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51357 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-042000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 11:11:26.382810    9387 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:11:26.385525    9387 cni.go:84] Creating CNI manager for ""
	I0327 11:11:26.385544    9387 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:11:26.385573    9387 start.go:340] cluster config:
	{Name:stopped-upgrade-042000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51357 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-042000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 11:11:26.385631    9387 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:11:26.394715    9387 out.go:177] * Starting "stopped-upgrade-042000" primary control-plane node in "stopped-upgrade-042000" cluster
	I0327 11:11:26.398728    9387 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0327 11:11:26.398743    9387 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0327 11:11:26.398748    9387 cache.go:56] Caching tarball of preloaded images
	I0327 11:11:26.398802    9387 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:11:26.398808    9387 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0327 11:11:26.398862    9387 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/config.json ...
	I0327 11:11:26.399333    9387 start.go:360] acquireMachinesLock for stopped-upgrade-042000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:11:26.399368    9387 start.go:364] duration metric: took 28.041µs to acquireMachinesLock for "stopped-upgrade-042000"
	I0327 11:11:26.399378    9387 start.go:96] Skipping create...Using existing machine configuration
	I0327 11:11:26.399381    9387 fix.go:54] fixHost starting: 
	I0327 11:11:26.399489    9387 fix.go:112] recreateIfNeeded on stopped-upgrade-042000: state=Stopped err=<nil>
	W0327 11:11:26.399498    9387 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 11:11:26.403759    9387 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-042000" ...
	I0327 11:11:26.411749    9387 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/stopped-upgrade-042000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/stopped-upgrade-042000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/stopped-upgrade-042000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51322-:22,hostfwd=tcp::51323-:2376,hostname=stopped-upgrade-042000 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/stopped-upgrade-042000/disk.qcow2
	I0327 11:11:26.458932    9387 main.go:141] libmachine: STDOUT: 
	I0327 11:11:26.458968    9387 main.go:141] libmachine: STDERR: 
	I0327 11:11:26.458973    9387 main.go:141] libmachine: Waiting for VM to start (ssh -p 51322 docker@127.0.0.1)...
	I0327 11:11:46.465016    9387 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/config.json ...
	I0327 11:11:46.465581    9387 machine.go:94] provisionDockerMachine start ...
	I0327 11:11:46.465723    9387 main.go:141] libmachine: Using SSH client type: native
	I0327 11:11:46.465998    9387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104de5bf0] 0x104de8450 <nil>  [] 0s} localhost 51322 <nil> <nil>}
	I0327 11:11:46.466008    9387 main.go:141] libmachine: About to run SSH command:
	hostname
	I0327 11:11:46.541662    9387 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0327 11:11:46.541688    9387 buildroot.go:166] provisioning hostname "stopped-upgrade-042000"
	I0327 11:11:46.541756    9387 main.go:141] libmachine: Using SSH client type: native
	I0327 11:11:46.541933    9387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104de5bf0] 0x104de8450 <nil>  [] 0s} localhost 51322 <nil> <nil>}
	I0327 11:11:46.541943    9387 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-042000 && echo "stopped-upgrade-042000" | sudo tee /etc/hostname
	I0327 11:11:46.623936    9387 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-042000
	
	I0327 11:11:46.624010    9387 main.go:141] libmachine: Using SSH client type: native
	I0327 11:11:46.624197    9387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104de5bf0] 0x104de8450 <nil>  [] 0s} localhost 51322 <nil> <nil>}
	I0327 11:11:46.624212    9387 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-042000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-042000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-042000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 11:11:46.694251    9387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 11:11:46.694264    9387 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18517-6013/.minikube CaCertPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18517-6013/.minikube}
	I0327 11:11:46.694277    9387 buildroot.go:174] setting up certificates
	I0327 11:11:46.694282    9387 provision.go:84] configureAuth start
	I0327 11:11:46.694289    9387 provision.go:143] copyHostCerts
	I0327 11:11:46.694366    9387 exec_runner.go:144] found /Users/jenkins/minikube-integration/18517-6013/.minikube/ca.pem, removing ...
	I0327 11:11:46.694373    9387 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18517-6013/.minikube/ca.pem
	I0327 11:11:46.694474    9387 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18517-6013/.minikube/ca.pem (1082 bytes)
	I0327 11:11:46.694663    9387 exec_runner.go:144] found /Users/jenkins/minikube-integration/18517-6013/.minikube/cert.pem, removing ...
	I0327 11:11:46.694667    9387 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18517-6013/.minikube/cert.pem
	I0327 11:11:46.694717    9387 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18517-6013/.minikube/cert.pem (1123 bytes)
	I0327 11:11:46.694831    9387 exec_runner.go:144] found /Users/jenkins/minikube-integration/18517-6013/.minikube/key.pem, removing ...
	I0327 11:11:46.694834    9387 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18517-6013/.minikube/key.pem
	I0327 11:11:46.694888    9387 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18517-6013/.minikube/key.pem (1679 bytes)
	I0327 11:11:46.694991    9387 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-042000 san=[127.0.0.1 localhost minikube stopped-upgrade-042000]
	I0327 11:11:46.817660    9387 provision.go:177] copyRemoteCerts
	I0327 11:11:46.817704    9387 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 11:11:46.817712    9387 sshutil.go:53] new ssh client: &{IP:localhost Port:51322 SSHKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/stopped-upgrade-042000/id_rsa Username:docker}
	I0327 11:11:46.853899    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0327 11:11:46.861028    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0327 11:11:46.867328    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0327 11:11:46.874492    9387 provision.go:87] duration metric: took 180.201916ms to configureAuth
	I0327 11:11:46.874501    9387 buildroot.go:189] setting minikube options for container-runtime
	I0327 11:11:46.874623    9387 config.go:182] Loaded profile config "stopped-upgrade-042000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 11:11:46.874658    9387 main.go:141] libmachine: Using SSH client type: native
	I0327 11:11:46.874742    9387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104de5bf0] 0x104de8450 <nil>  [] 0s} localhost 51322 <nil> <nil>}
	I0327 11:11:46.874748    9387 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0327 11:11:46.940485    9387 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0327 11:11:46.940496    9387 buildroot.go:70] root file system type: tmpfs
	I0327 11:11:46.940546    9387 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0327 11:11:46.940595    9387 main.go:141] libmachine: Using SSH client type: native
	I0327 11:11:46.940694    9387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104de5bf0] 0x104de8450 <nil>  [] 0s} localhost 51322 <nil> <nil>}
	I0327 11:11:46.940728    9387 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0327 11:11:47.008186    9387 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0327 11:11:47.008241    9387 main.go:141] libmachine: Using SSH client type: native
	I0327 11:11:47.008355    9387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104de5bf0] 0x104de8450 <nil>  [] 0s} localhost 51322 <nil> <nil>}
	I0327 11:11:47.008363    9387 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0327 11:11:47.381051    9387 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0327 11:11:47.381064    9387 machine.go:97] duration metric: took 915.47975ms to provisionDockerMachine
	I0327 11:11:47.381071    9387 start.go:293] postStartSetup for "stopped-upgrade-042000" (driver="qemu2")
	I0327 11:11:47.381077    9387 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 11:11:47.381140    9387 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 11:11:47.381159    9387 sshutil.go:53] new ssh client: &{IP:localhost Port:51322 SSHKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/stopped-upgrade-042000/id_rsa Username:docker}
	I0327 11:11:47.414666    9387 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 11:11:47.415955    9387 info.go:137] Remote host: Buildroot 2021.02.12
	I0327 11:11:47.415962    9387 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18517-6013/.minikube/addons for local assets ...
	I0327 11:11:47.416039    9387 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18517-6013/.minikube/files for local assets ...
	I0327 11:11:47.416147    9387 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18517-6013/.minikube/files/etc/ssl/certs/64762.pem -> 64762.pem in /etc/ssl/certs
	I0327 11:11:47.416275    9387 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0327 11:11:47.419013    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/files/etc/ssl/certs/64762.pem --> /etc/ssl/certs/64762.pem (1708 bytes)
	I0327 11:11:47.426123    9387 start.go:296] duration metric: took 45.04775ms for postStartSetup
	I0327 11:11:47.426138    9387 fix.go:56] duration metric: took 21.026892958s for fixHost
	I0327 11:11:47.426172    9387 main.go:141] libmachine: Using SSH client type: native
	I0327 11:11:47.426307    9387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104de5bf0] 0x104de8450 <nil>  [] 0s} localhost 51322 <nil> <nil>}
	I0327 11:11:47.426314    9387 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0327 11:11:47.493073    9387 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711563107.933312629
	
	I0327 11:11:47.493083    9387 fix.go:216] guest clock: 1711563107.933312629
	I0327 11:11:47.493087    9387 fix.go:229] Guest: 2024-03-27 11:11:47.933312629 -0700 PDT Remote: 2024-03-27 11:11:47.42614 -0700 PDT m=+21.145543292 (delta=507.172629ms)
	I0327 11:11:47.493103    9387 fix.go:200] guest clock delta is within tolerance: 507.172629ms
	I0327 11:11:47.493105    9387 start.go:83] releasing machines lock for "stopped-upgrade-042000", held for 21.093868584s
	I0327 11:11:47.493171    9387 ssh_runner.go:195] Run: cat /version.json
	I0327 11:11:47.493183    9387 sshutil.go:53] new ssh client: &{IP:localhost Port:51322 SSHKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/stopped-upgrade-042000/id_rsa Username:docker}
	I0327 11:11:47.493172    9387 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 11:11:47.493204    9387 sshutil.go:53] new ssh client: &{IP:localhost Port:51322 SSHKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/stopped-upgrade-042000/id_rsa Username:docker}
	W0327 11:11:47.493811    9387 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51322: connect: connection refused
	I0327 11:11:47.493832    9387 retry.go:31] will retry after 208.46166ms: dial tcp [::1]:51322: connect: connection refused
	W0327 11:11:47.528193    9387 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0327 11:11:47.528248    9387 ssh_runner.go:195] Run: systemctl --version
	I0327 11:11:47.530064    9387 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0327 11:11:47.531690    9387 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0327 11:11:47.531715    9387 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0327 11:11:47.534749    9387 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0327 11:11:47.539638    9387 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0327 11:11:47.539646    9387 start.go:494] detecting cgroup driver to use...
	I0327 11:11:47.539714    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 11:11:47.546871    9387 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0327 11:11:47.550678    9387 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0327 11:11:47.554040    9387 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0327 11:11:47.554069    9387 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0327 11:11:47.557043    9387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 11:11:47.559889    9387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0327 11:11:47.563032    9387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 11:11:47.566419    9387 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 11:11:47.569448    9387 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0327 11:11:47.572327    9387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0327 11:11:47.575271    9387 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0327 11:11:47.578739    9387 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 11:11:47.581792    9387 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 11:11:47.584414    9387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 11:11:47.669867    9387 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0327 11:11:47.676788    9387 start.go:494] detecting cgroup driver to use...
	I0327 11:11:47.676865    9387 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0327 11:11:47.682356    9387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 11:11:47.687230    9387 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0327 11:11:47.693100    9387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 11:11:47.697625    9387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 11:11:47.701971    9387 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0327 11:11:47.770656    9387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 11:11:47.787040    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 11:11:47.792340    9387 ssh_runner.go:195] Run: which cri-dockerd
	I0327 11:11:47.793532    9387 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0327 11:11:47.795899    9387 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0327 11:11:47.800728    9387 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0327 11:11:47.876941    9387 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0327 11:11:47.953308    9387 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0327 11:11:47.953393    9387 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0327 11:11:47.958784    9387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 11:11:48.034716    9387 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0327 11:11:48.149611    9387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0327 11:11:48.154198    9387 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0327 11:11:48.159333    9387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0327 11:11:48.164201    9387 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0327 11:11:48.246111    9387 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0327 11:11:48.320631    9387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 11:11:48.402596    9387 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0327 11:11:48.408698    9387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0327 11:11:48.412857    9387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 11:11:48.494205    9387 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0327 11:11:48.537031    9387 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0327 11:11:48.537106    9387 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0327 11:11:48.539061    9387 start.go:562] Will wait 60s for crictl version
	I0327 11:11:48.539117    9387 ssh_runner.go:195] Run: which crictl
	I0327 11:11:48.540483    9387 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 11:11:48.555510    9387 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0327 11:11:48.555578    9387 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0327 11:11:48.576880    9387 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0327 11:11:48.596971    9387 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0327 11:11:48.597099    9387 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0327 11:11:48.598508    9387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 11:11:48.602448    9387 kubeadm.go:877] updating cluster {Name:stopped-upgrade-042000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51357 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-042000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0327 11:11:48.602499    9387 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0327 11:11:48.602537    9387 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0327 11:11:48.617371    9387 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0327 11:11:48.617379    9387 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0327 11:11:48.617425    9387 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0327 11:11:48.620427    9387 ssh_runner.go:195] Run: which lz4
	I0327 11:11:48.621618    9387 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0327 11:11:48.622863    9387 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0327 11:11:48.622872    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0327 11:11:49.385638    9387 docker.go:649] duration metric: took 764.056875ms to copy over tarball
	I0327 11:11:49.385708    9387 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0327 11:11:50.571320    9387 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.185606583s)
	I0327 11:11:50.571335    9387 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0327 11:11:50.587305    9387 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0327 11:11:50.590972    9387 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0327 11:11:50.595995    9387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 11:11:50.675476    9387 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0327 11:11:52.276635    9387 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.601153625s)
	I0327 11:11:52.276740    9387 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0327 11:11:52.289459    9387 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0327 11:11:52.289470    9387 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0327 11:11:52.289475    9387 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0327 11:11:52.296326    9387 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 11:11:52.296431    9387 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 11:11:52.296440    9387 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0327 11:11:52.296500    9387 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 11:11:52.296511    9387 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 11:11:52.296555    9387 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0327 11:11:52.296555    9387 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 11:11:52.296604    9387 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 11:11:52.303677    9387 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 11:11:52.305403    9387 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 11:11:52.305417    9387 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0327 11:11:52.305307    9387 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 11:11:52.305486    9387 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 11:11:52.305509    9387 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 11:11:52.305545    9387 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0327 11:11:52.305602    9387 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	W0327 11:11:54.280866    9387 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0327 11:11:54.281295    9387 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0327 11:11:54.308832    9387 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0327 11:11:54.308878    9387 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 11:11:54.308967    9387 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0327 11:11:54.327259    9387 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0327 11:11:54.327419    9387 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0327 11:11:54.330766    9387 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0327 11:11:54.330789    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0327 11:11:54.360577    9387 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0327 11:11:54.366710    9387 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0327 11:11:54.366722    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0327 11:11:54.373732    9387 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0327 11:11:54.373754    9387 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0327 11:11:54.373807    9387 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0327 11:11:54.409078    9387 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0327 11:11:54.426360    9387 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0327 11:11:54.430932    9387 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0327 11:11:54.430983    9387 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0327 11:11:54.431004    9387 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0327 11:11:54.431032    9387 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 11:11:54.431073    9387 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0327 11:11:54.440694    9387 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0327 11:11:54.440716    9387 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0327 11:11:54.440776    9387 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0327 11:11:54.441853    9387 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0327 11:11:54.442726    9387 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0327 11:11:54.446641    9387 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 11:11:54.450754    9387 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0327 11:11:54.455517    9387 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0327 11:11:54.455630    9387 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0327 11:11:54.456147    9387 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0327 11:11:54.456163    9387 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 11:11:54.456196    9387 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0327 11:11:54.459668    9387 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0327 11:11:54.459686    9387 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 11:11:54.459734    9387 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 11:11:54.473024    9387 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0327 11:11:54.473037    9387 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0327 11:11:54.473062    9387 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 11:11:54.473060    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0327 11:11:54.473111    9387 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0327 11:11:54.476569    9387 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0327 11:11:54.476619    9387 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0327 11:11:54.482552    9387 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0327 11:11:54.482565    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0327 11:11:54.486256    9387 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0327 11:11:54.513404    9387 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0327 11:11:54.852002    9387 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0327 11:11:54.852506    9387 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 11:11:54.890869    9387 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0327 11:11:54.890912    9387 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 11:11:54.891024    9387 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 11:11:54.916524    9387 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0327 11:11:54.916680    9387 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0327 11:11:54.918723    9387 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0327 11:11:54.918739    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0327 11:11:54.950911    9387 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0327 11:11:54.950924    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0327 11:11:55.186108    9387 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0327 11:11:55.186145    9387 cache_images.go:92] duration metric: took 2.896680917s to LoadCachedImages
	W0327 11:11:55.186186    9387 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
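	[Note] The cached-image lines above follow one pattern per image: stat the tarball on the node, scp it from the local cache if missing, then pipe it into "docker load". A minimal Go sketch of that sequence follows; it is illustrative only (plain os/exec on the local machine instead of minikube's ssh_runner, with the pause_3.7 paths taken from the log):

	// loadimage.go - sketch of the stat -> transfer -> docker load flow above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cached := "/tmp/cache/pause_3.7"               // hypothetical local cache path
		onNode := "/var/lib/minikube/images/pause_3.7" // destination used in the log

		// Existence check, mirroring: stat -c "%s %y" <path>
		if _, err := os.Stat(onNode); err != nil {
			// Transfer step (the log does this via scp over the ssh runner).
			if out, err := exec.Command("cp", cached, onNode).CombinedOutput(); err != nil {
				fmt.Fprintf(os.Stderr, "transfer failed: %v\n%s", err, out)
				os.Exit(1)
			}
		}

		// Load step, mirroring: /bin/bash -c "sudo cat <tar> | docker load"
		load := exec.Command("/bin/bash", "-c", fmt.Sprintf("cat %q | docker load", onNode))
		load.Stdout, load.Stderr = os.Stdout, os.Stderr
		if err := load.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "docker load failed: %v\n", err)
			os.Exit(1)
		}
	}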
	I0327 11:11:55.186192    9387 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0327 11:11:55.186238    9387 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-042000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-042000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0327 11:11:55.186314    9387 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0327 11:11:55.199182    9387 cni.go:84] Creating CNI manager for ""
	I0327 11:11:55.199194    9387 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:11:55.199205    9387 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0327 11:11:55.199214    9387 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-042000 NodeName:stopped-upgrade-042000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0327 11:11:55.199284    9387 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-042000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
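	[Note] The kubeadm config written above is one multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A small stdlib-only Go sketch, not minikube code, that splits such a file on the separators and reports each document's kind:

	// kubeadmcfg.go - list the "kind:" of each document in kubeadm.yaml.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		doc := 1
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case line == "---":
				doc++
			case strings.HasPrefix(line, "kind:"):
				fmt.Printf("document %d: %s\n", doc, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
			}
		}
	}

	Run against the config above this prints four documents, matching the four API kinds minikube renders.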
	I0327 11:11:55.199343    9387 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0327 11:11:55.202608    9387 binaries.go:44] Found k8s binaries, skipping transfer
	I0327 11:11:55.202639    9387 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0327 11:11:55.205805    9387 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0327 11:11:55.210658    9387 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 11:11:55.215264    9387 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0327 11:11:55.220378    9387 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0327 11:11:55.221486    9387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 11:11:55.224819    9387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 11:11:55.306748    9387 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 11:11:55.312808    9387 certs.go:68] Setting up /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000 for IP: 10.0.2.15
	I0327 11:11:55.312817    9387 certs.go:194] generating shared ca certs ...
	I0327 11:11:55.312825    9387 certs.go:226] acquiring lock for ca certs: {Name:mk7b549dba732db56799e548b1d7e96b357996ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:11:55.312991    9387 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/ca.key
	I0327 11:11:55.313731    9387 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/proxy-client-ca.key
	I0327 11:11:55.313737    9387 certs.go:256] generating profile certs ...
	I0327 11:11:55.313979    9387 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/client.key
	I0327 11:11:55.313997    9387 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.key.bba745c7
	I0327 11:11:55.314008    9387 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.crt.bba745c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0327 11:11:55.399846    9387 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.crt.bba745c7 ...
	I0327 11:11:55.399862    9387 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.crt.bba745c7: {Name:mk4d87191c8a2c07bfc811c69b28645fdd58add9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:11:55.400171    9387 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.key.bba745c7 ...
	I0327 11:11:55.400179    9387 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.key.bba745c7: {Name:mk6de3173a0fb4042d5e3d484a5b43b481f6734e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:11:55.400321    9387 certs.go:381] copying /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.crt.bba745c7 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.crt
	I0327 11:11:55.400465    9387 certs.go:385] copying /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.key.bba745c7 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.key
	I0327 11:11:55.400744    9387 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/proxy-client.key
	I0327 11:11:55.400921    9387 certs.go:484] found cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/6476.pem (1338 bytes)
	W0327 11:11:55.401128    9387 certs.go:480] ignoring /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/6476_empty.pem, impossibly tiny 0 bytes
	I0327 11:11:55.401134    9387 certs.go:484] found cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca-key.pem (1675 bytes)
	I0327 11:11:55.401158    9387 certs.go:484] found cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem (1082 bytes)
	I0327 11:11:55.401180    9387 certs.go:484] found cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem (1123 bytes)
	I0327 11:11:55.401198    9387 certs.go:484] found cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/key.pem (1679 bytes)
	I0327 11:11:55.401241    9387 certs.go:484] found cert: /Users/jenkins/minikube-integration/18517-6013/.minikube/files/etc/ssl/certs/64762.pem (1708 bytes)
	I0327 11:11:55.401571    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 11:11:55.408042    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0327 11:11:55.414850    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 11:11:55.422500    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0327 11:11:55.429393    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0327 11:11:55.436223    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0327 11:11:55.443139    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 11:11:55.450536    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0327 11:11:55.457174    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 11:11:55.463322    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/6476.pem --> /usr/share/ca-certificates/6476.pem (1338 bytes)
	I0327 11:11:55.470336    9387 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18517-6013/.minikube/files/etc/ssl/certs/64762.pem --> /usr/share/ca-certificates/64762.pem (1708 bytes)
	I0327 11:11:55.477089    9387 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0327 11:11:55.481923    9387 ssh_runner.go:195] Run: openssl version
	I0327 11:11:55.483757    9387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 11:11:55.487082    9387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 11:11:55.488438    9387 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 18:08 /usr/share/ca-certificates/minikubeCA.pem
	I0327 11:11:55.488460    9387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 11:11:55.490078    9387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0327 11:11:55.492879    9387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6476.pem && ln -fs /usr/share/ca-certificates/6476.pem /etc/ssl/certs/6476.pem"
	I0327 11:11:55.495636    9387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6476.pem
	I0327 11:11:55.497085    9387 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 17:55 /usr/share/ca-certificates/6476.pem
	I0327 11:11:55.497103    9387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6476.pem
	I0327 11:11:55.498790    9387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6476.pem /etc/ssl/certs/51391683.0"
	I0327 11:11:55.502070    9387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64762.pem && ln -fs /usr/share/ca-certificates/64762.pem /etc/ssl/certs/64762.pem"
	I0327 11:11:55.504800    9387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/64762.pem
	I0327 11:11:55.506035    9387 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 17:55 /usr/share/ca-certificates/64762.pem
	I0327 11:11:55.506051    9387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64762.pem
	I0327 11:11:55.507766    9387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64762.pem /etc/ssl/certs/3ec20f2e.0"
	I0327 11:11:55.511120    9387 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 11:11:55.512587    9387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0327 11:11:55.514760    9387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0327 11:11:55.517081    9387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0327 11:11:55.519031    9387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0327 11:11:55.520781    9387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0327 11:11:55.522513    9387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
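	[Note] Each "openssl x509 -noout -in <cert> -checkend 86400" run above exits nonzero if the certificate expires within 24 hours. A minimal Go equivalent of that single check, parsing the PEM directly (path taken from the log; reading it needs root on the node):

	// certcheck.go - fail if the certificate expires within 86400 seconds.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate will expire within 86400s")
			os.Exit(1)
		}
		fmt.Println("certificate valid beyond 24h")
	}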
	I0327 11:11:55.524347    9387 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-042000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51357 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-042000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 11:11:55.524411    9387 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0327 11:11:55.534606    9387 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0327 11:11:55.538023    9387 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0327 11:11:55.538030    9387 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0327 11:11:55.538034    9387 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0327 11:11:55.538062    9387 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0327 11:11:55.540783    9387 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0327 11:11:55.541064    9387 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-042000" does not appear in /Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:11:55.541157    9387 kubeconfig.go:62] /Users/jenkins/minikube-integration/18517-6013/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-042000" cluster setting kubeconfig missing "stopped-upgrade-042000" context setting]
	I0327 11:11:55.541330    9387 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/kubeconfig: {Name:mkbb2a293ed6f55faa985236f49efd479958575e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:11:55.541773    9387 kapi.go:59] client config for stopped-upgrade-042000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/client.key", CAFile:"/Users/jenkins/minikube-integration/18517-6013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1060d6b90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 11:11:55.542205    9387 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0327 11:11:55.544858    9387 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-042000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
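	[Note] The drift check at kubeadm.go:634 above amounts to running "diff -u old new" and treating exit status 1 (files differ) as "reconfigure". A sketch of that decision in Go, with the paths from the log; the surrounding logic is an illustrative assumption, not minikube's actual code:

	// driftcheck.go - interpret diff's exit status as a drift signal.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("diff", "-u",
			"/var/tmp/minikube/kubeadm.yaml",
			"/var/tmp/minikube/kubeadm.yaml.new")
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Println("no drift: configs identical")
			return
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			fmt.Printf("detected kubeadm config drift (will reconfigure):\n%s", out)
			return
		}
		fmt.Printf("diff failed: %v\n", err) // exit status 2: e.g. a file is missing
	}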
	I0327 11:11:55.544865    9387 kubeadm.go:1154] stopping kube-system containers ...
	I0327 11:11:55.544900    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0327 11:11:55.556022    9387 docker.go:483] Stopping containers: [0dc5ad39c5a3 2358dbbd7710 cef10ae8ee0a 564c237001b0 7d1480bb2f78 966c2fcaeaec 2ce9bf2e1447 5223b5e56540]
	I0327 11:11:55.556090    9387 ssh_runner.go:195] Run: docker stop 0dc5ad39c5a3 2358dbbd7710 cef10ae8ee0a 564c237001b0 7d1480bb2f78 966c2fcaeaec 2ce9bf2e1447 5223b5e56540
	I0327 11:11:55.567193    9387 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0327 11:11:55.572883    9387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 11:11:55.575516    9387 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 11:11:55.575521    9387 kubeadm.go:156] found existing configuration files:
	
	I0327 11:11:55.575545    9387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/admin.conf
	I0327 11:11:55.578463    9387 kubeadm.go:162] "https://control-plane.minikube.internal:51357" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 11:11:55.578482    9387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 11:11:55.581247    9387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/kubelet.conf
	I0327 11:11:55.583532    9387 kubeadm.go:162] "https://control-plane.minikube.internal:51357" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 11:11:55.583549    9387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 11:11:55.586525    9387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/controller-manager.conf
	I0327 11:11:55.589126    9387 kubeadm.go:162] "https://control-plane.minikube.internal:51357" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 11:11:55.589147    9387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 11:11:55.591475    9387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/scheduler.conf
	I0327 11:11:55.594445    9387 kubeadm.go:162] "https://control-plane.minikube.internal:51357" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 11:11:55.594466    9387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0327 11:11:55.597152    9387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 11:11:55.599763    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 11:11:55.621813    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 11:11:56.066174    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0327 11:11:56.195944    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 11:11:56.216679    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0327 11:11:56.239995    9387 api_server.go:52] waiting for apiserver process to appear ...
	I0327 11:11:56.240068    9387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 11:11:56.742107    9387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 11:11:57.242192    9387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 11:11:57.250063    9387 api_server.go:72] duration metric: took 1.010075375s to wait for apiserver process to appear ...
	I0327 11:11:57.250075    9387 api_server.go:88] waiting for apiserver healthz status ...
	I0327 11:11:57.250090    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:02.252181    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:02.252223    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:07.252458    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:07.252506    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:12.252846    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:12.252904    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:17.253523    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:17.253569    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:22.254153    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:22.254186    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:27.254999    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:27.255046    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:32.256117    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:32.256167    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:37.257630    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:37.257675    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:42.259408    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:42.259434    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:47.261566    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:47.261627    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:52.263801    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:12:52.263819    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:12:57.266018    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
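	[Note] The repeating "Checking apiserver healthz ... stopped: context deadline exceeded" lines above are api_server.go polling https://10.0.2.15:8443/healthz with a 5s per-request timeout and never getting an answer. A minimal sketch of such a wait loop; skipping TLS verification here is an illustrative shortcut (minikube itself authenticates with the cluster's client certificates), and the 4-minute overall deadline is an assumption:

	// healthz.go - poll the apiserver healthz endpoint until it answers "ok".
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the 5s gaps between checks above
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(time.Second)
		}
		fmt.Println("timed out waiting for apiserver healthz")
	}

	When every request times out, as in this run, the retry budget is exhausted and minikube falls back to gathering component logs, which is what the docker ps / docker logs runs below are doing.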
	I0327 11:12:57.266320    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:12:57.291609    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:12:57.291735    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:12:57.310085    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:12:57.310173    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:12:57.323022    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:12:57.323100    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:12:57.334306    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:12:57.334389    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:12:57.346419    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:12:57.346486    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:12:57.360659    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:12:57.360732    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:12:57.380211    9387 logs.go:276] 0 containers: []
	W0327 11:12:57.380221    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:12:57.380277    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:12:57.390678    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:12:57.390702    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:12:57.390706    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:12:57.406063    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:12:57.406073    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:12:57.416815    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:12:57.416825    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:12:57.434450    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:12:57.434460    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:12:57.445584    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:12:57.445594    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:12:57.458455    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:12:57.458464    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:12:57.473025    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:12:57.473035    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:12:57.493779    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:12:57.493790    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:12:57.507243    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:12:57.507257    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:12:57.524483    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:12:57.524493    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:12:57.632924    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:12:57.632944    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:12:57.645229    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:12:57.645244    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:12:57.649814    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:12:57.649822    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:12:57.689283    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:12:57.689299    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:12:57.707139    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:12:57.707152    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:12:57.723766    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:12:57.723776    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:12:57.749225    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:12:57.749241    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:13:00.288642    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:05.290903    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:05.291046    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:13:05.303069    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:13:05.303145    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:13:05.314094    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:13:05.314168    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:13:05.327268    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:13:05.327332    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:13:05.339594    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:13:05.339666    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:13:05.349870    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:13:05.349932    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:13:05.361026    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:13:05.361094    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:13:05.371675    9387 logs.go:276] 0 containers: []
	W0327 11:13:05.371693    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:13:05.371763    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:13:05.381958    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:13:05.381978    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:13:05.381982    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:13:05.396561    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:13:05.396574    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:13:05.409429    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:13:05.409441    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:13:05.422674    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:13:05.422685    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:13:05.459906    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:13:05.459918    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:13:05.471697    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:13:05.471709    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:13:05.484271    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:13:05.484283    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:13:05.522220    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:13:05.522232    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:13:05.537648    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:13:05.537660    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:13:05.558767    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:13:05.558780    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:13:05.573472    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:13:05.573483    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:13:05.610592    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:13:05.610605    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:13:05.623362    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:13:05.623372    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:13:05.642952    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:13:05.642965    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:13:05.667859    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:13:05.667873    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:13:05.672022    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:13:05.672033    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:13:05.687341    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:13:05.687360    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:13:08.203734    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:13.205816    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:13.206202    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:13:13.250432    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:13:13.250582    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:13:13.271216    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:13:13.271310    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:13:13.285435    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:13:13.285525    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:13:13.297961    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:13:13.298038    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:13:13.311832    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:13:13.311896    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:13:13.322977    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:13:13.323050    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:13:13.332837    9387 logs.go:276] 0 containers: []
	W0327 11:13:13.332850    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:13:13.332907    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:13:13.344613    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:13:13.344632    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:13:13.344639    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:13:13.349290    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:13:13.349298    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:13:13.363382    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:13:13.363391    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:13:13.375124    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:13:13.375134    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:13:13.386346    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:13:13.386356    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:13:13.398131    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:13:13.398140    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:13:13.421691    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:13:13.421700    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:13:13.462056    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:13:13.462066    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:13:13.480824    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:13:13.480835    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:13:13.518674    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:13:13.518686    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:13:13.533413    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:13:13.533425    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:13:13.550954    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:13:13.550965    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:13:13.570726    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:13:13.570736    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:13:13.582595    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:13:13.582606    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:13:13.621534    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:13:13.621545    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:13:13.636528    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:13:13.636539    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:13:13.647953    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:13:13.647964    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:13:16.165116    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:21.167335    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:21.167500    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:13:21.178738    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:13:21.178815    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:13:21.194939    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:13:21.195008    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:13:21.205315    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:13:21.205378    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:13:21.215658    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:13:21.215735    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:13:21.226017    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:13:21.226091    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:13:21.247531    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:13:21.247600    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:13:21.257522    9387 logs.go:276] 0 containers: []
	W0327 11:13:21.257534    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:13:21.257597    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:13:21.275755    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:13:21.275788    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:13:21.275794    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:13:21.314812    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:13:21.318375    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:13:21.330840    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:13:21.330854    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:13:21.350406    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:13:21.350418    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:13:21.361756    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:13:21.361768    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:13:21.398058    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:13:21.398068    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:13:21.401934    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:13:21.401940    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:13:21.415951    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:13:21.415965    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:13:21.431160    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:13:21.431172    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:13:21.446144    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:13:21.446154    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:13:21.458410    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:13:21.458425    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:13:21.494310    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:13:21.494324    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:13:21.511826    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:13:21.511843    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:13:21.523941    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:13:21.523953    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:13:21.539721    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:13:21.539732    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:13:21.552312    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:13:21.552324    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:13:21.573417    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:13:21.573432    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:13:24.100410    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:29.102529    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:29.102706    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:13:29.115121    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:13:29.115200    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:13:29.130647    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:13:29.130709    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:13:29.141626    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:13:29.141693    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:13:29.152711    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:13:29.152784    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:13:29.164528    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:13:29.164598    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:13:29.175283    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:13:29.175350    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:13:29.185258    9387 logs.go:276] 0 containers: []
	W0327 11:13:29.185268    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:13:29.185316    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:13:29.195061    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:13:29.195080    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:13:29.195085    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:13:29.209642    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:13:29.209654    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:13:29.221294    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:13:29.221305    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:13:29.236501    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:13:29.236512    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:13:29.248087    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:13:29.248101    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:13:29.259262    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:13:29.259273    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:13:29.296396    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:13:29.296408    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:13:29.300871    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:13:29.300878    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:13:29.314632    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:13:29.314643    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:13:29.352317    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:13:29.352332    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:13:29.367991    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:13:29.368003    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:13:29.379730    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:13:29.379741    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:13:29.398437    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:13:29.398446    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:13:29.413221    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:13:29.413235    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:13:29.437791    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:13:29.437798    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:13:29.473834    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:13:29.473845    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:13:29.485911    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:13:29.485922    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:13:31.999795    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:37.002001    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:37.002181    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:13:37.018823    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:13:37.018908    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:13:37.031605    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:13:37.031671    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:13:37.043282    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:13:37.043343    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:13:37.054193    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:13:37.054259    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:13:37.065711    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:13:37.065779    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:13:37.078301    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:13:37.078365    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:13:37.088195    9387 logs.go:276] 0 containers: []
	W0327 11:13:37.088210    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:13:37.088263    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:13:37.098548    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:13:37.098568    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:13:37.098574    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:13:37.114206    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:13:37.114225    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:13:37.138169    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:13:37.138177    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:13:37.175296    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:13:37.175307    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:13:37.179276    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:13:37.179283    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:13:37.219081    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:13:37.219091    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:13:37.233603    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:13:37.233613    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:13:37.248406    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:13:37.248415    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:13:37.260475    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:13:37.260490    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:13:37.278487    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:13:37.278498    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:13:37.289600    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:13:37.289611    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:13:37.302004    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:13:37.302018    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:13:37.314160    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:13:37.314171    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:13:37.325773    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:13:37.325784    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:13:37.366632    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:13:37.366645    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:13:37.382394    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:13:37.382407    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:13:37.396694    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:13:37.396706    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:13:39.911735    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:44.914029    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:44.914470    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:13:44.949181    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:13:44.949308    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:13:44.971605    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:13:44.971689    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:13:44.985270    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:13:44.985353    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:13:44.997912    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:13:44.997982    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:13:45.008641    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:13:45.008697    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:13:45.019186    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:13:45.019253    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:13:45.029497    9387 logs.go:276] 0 containers: []
	W0327 11:13:45.029509    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:13:45.029566    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:13:45.040512    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:13:45.040532    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:13:45.040538    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:13:45.082979    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:13:45.082992    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:13:45.096775    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:13:45.096786    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:13:45.108417    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:13:45.108429    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:13:45.119590    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:13:45.119602    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:13:45.143339    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:13:45.143350    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:13:45.157762    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:13:45.157773    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:13:45.170332    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:13:45.170343    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:13:45.187736    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:13:45.187746    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:13:45.202654    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:13:45.202664    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:13:45.241612    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:13:45.241621    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:13:45.245829    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:13:45.245837    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:13:45.259615    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:13:45.259624    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:13:45.274129    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:13:45.274140    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:13:45.289448    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:13:45.289460    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:13:45.327995    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:13:45.328008    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:13:45.342130    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:13:45.342143    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:13:47.856836    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:13:52.859340    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:13:52.859483    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:13:52.874212    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:13:52.874291    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:13:52.886149    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:13:52.886220    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:13:52.896589    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:13:52.896656    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:13:52.906926    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:13:52.906993    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:13:52.925382    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:13:52.925447    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:13:52.935559    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:13:52.935617    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:13:52.953585    9387 logs.go:276] 0 containers: []
	W0327 11:13:52.953596    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:13:52.953653    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:13:52.964517    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:13:52.964534    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:13:52.964540    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:13:52.975355    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:13:52.975367    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:13:52.991403    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:13:52.991419    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:13:53.005800    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:13:53.005811    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:13:53.017193    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:13:53.017206    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:13:53.053501    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:13:53.053513    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:13:53.066993    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:13:53.067004    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:13:53.080668    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:13:53.080681    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:13:53.119626    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:13:53.119638    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:13:53.132035    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:13:53.132045    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:13:53.144537    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:13:53.144551    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:13:53.148684    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:13:53.148691    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:13:53.163311    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:13:53.163322    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:13:53.177717    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:13:53.177727    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:13:53.192618    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:13:53.192629    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:13:53.209453    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:13:53.209465    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:13:53.233643    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:13:53.233651    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:13:55.773096    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:00.775471    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:00.775806    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:00.808072    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:14:00.808200    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:00.827630    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:14:00.827711    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:00.841635    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:14:00.841722    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:00.853846    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:14:00.853921    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:00.864642    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:14:00.864709    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:00.875552    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:14:00.875612    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:00.886054    9387 logs.go:276] 0 containers: []
	W0327 11:14:00.886065    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:00.886111    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:00.896457    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:14:00.896476    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:00.896481    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:00.931767    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:14:00.931777    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:14:00.947927    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:14:00.947940    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:14:00.960323    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:14:00.960336    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:14:00.971807    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:14:00.971821    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:14:00.988805    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:00.988816    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:00.992777    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:14:00.992784    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:14:01.006211    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:14:01.006222    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:14:01.045104    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:14:01.045117    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:14:01.060042    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:14:01.060052    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:14:01.075477    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:01.075488    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:01.100413    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:01.100421    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:01.137727    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:14:01.137735    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:14:01.148824    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:14:01.148834    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:14:01.163997    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:14:01.164007    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:14:01.175440    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:14:01.175450    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:14:01.186877    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:14:01.186888    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:03.701072    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:08.703388    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:08.703538    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:08.722704    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:14:08.722784    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:08.734932    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:14:08.735004    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:08.745584    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:14:08.745655    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:08.756018    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:14:08.756091    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:08.773190    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:14:08.773258    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:08.783562    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:14:08.783630    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:08.793285    9387 logs.go:276] 0 containers: []
	W0327 11:14:08.793298    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:08.793360    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:08.804192    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:14:08.804212    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:14:08.804217    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:14:08.819689    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:14:08.819698    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:14:08.830972    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:14:08.830984    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:14:08.867885    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:14:08.867895    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:14:08.881594    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:14:08.881603    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:14:08.893115    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:14:08.893126    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:14:08.905352    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:14:08.905365    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:14:08.917618    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:08.917629    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:08.942484    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:08.942495    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:08.981175    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:14:08.981184    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:14:08.995074    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:14:08.995088    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:14:09.008917    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:09.008931    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:09.045174    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:14:09.045187    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:14:09.063098    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:14:09.063108    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:14:09.077405    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:14:09.077419    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:14:09.089597    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:14:09.089608    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:09.102563    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:09.102578    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:11.609066    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:16.611293    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:16.611562    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:16.636348    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:14:16.636478    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:16.656325    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:14:16.656403    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:16.670288    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:14:16.670361    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:16.681847    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:14:16.681920    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:16.693161    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:14:16.693242    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:16.708917    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:14:16.708996    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:16.719696    9387 logs.go:276] 0 containers: []
	W0327 11:14:16.719706    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:16.719758    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:16.733398    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:14:16.733417    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:14:16.733422    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:14:16.748171    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:14:16.748182    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:14:16.759144    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:16.759156    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:16.763456    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:16.763463    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:16.797978    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:14:16.797991    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:14:16.818917    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:14:16.818931    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:14:16.833505    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:14:16.833514    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:14:16.846121    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:14:16.846134    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:14:16.861361    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:14:16.861371    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:14:16.878178    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:14:16.878191    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:14:16.889648    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:16.889659    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:16.913876    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:16.913886    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:16.952621    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:14:16.952630    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:14:16.989844    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:14:16.989855    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:14:17.007719    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:14:17.007729    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:14:17.019532    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:14:17.019543    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:14:17.032096    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:14:17.032110    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:19.545601    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:24.547850    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:24.548027    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:24.567780    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:14:24.567872    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:24.581679    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:14:24.581759    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:24.593305    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:14:24.593374    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:24.604403    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:14:24.604472    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:24.614600    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:14:24.614672    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:24.626202    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:14:24.626274    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:24.636601    9387 logs.go:276] 0 containers: []
	W0327 11:14:24.636612    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:24.636669    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:24.648345    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:14:24.648367    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:14:24.648374    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:14:24.666222    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:24.666233    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:24.704103    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:24.704111    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:24.708177    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:14:24.708183    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:14:24.722798    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:24.722808    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:24.746929    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:14:24.746937    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:14:24.761742    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:14:24.761752    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:14:24.779360    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:14:24.779372    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:14:24.798915    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:14:24.798929    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:14:24.841308    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:14:24.841321    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:14:24.859323    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:14:24.859333    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:24.871036    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:14:24.871049    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:14:24.882772    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:14:24.882783    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:14:24.896861    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:14:24.896872    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:14:24.907863    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:24.907875    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:24.951423    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:14:24.951438    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:14:24.965720    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:14:24.965731    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:14:27.482643    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:32.485444    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:32.485781    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:32.516083    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:14:32.516213    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:32.535341    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:14:32.535425    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:32.549424    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:14:32.549508    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:32.561537    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:14:32.561602    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:32.572100    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:14:32.572156    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:32.583268    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:14:32.583338    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:32.594177    9387 logs.go:276] 0 containers: []
	W0327 11:14:32.594191    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:32.594244    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:32.604971    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:14:32.604988    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:14:32.604995    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:14:32.616668    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:14:32.616679    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:14:32.627824    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:14:32.627838    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:32.643258    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:32.643271    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:32.647328    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:14:32.647338    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:14:32.683827    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:14:32.683839    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:14:32.698424    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:14:32.698436    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:14:32.712583    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:14:32.712592    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:14:32.726948    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:14:32.726958    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:14:32.738102    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:14:32.738114    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:14:32.749880    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:32.749891    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:32.774938    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:32.774950    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:32.813797    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:32.813809    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:32.855264    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:14:32.855274    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:14:32.867931    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:14:32.867944    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:14:32.883160    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:14:32.883170    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:14:32.897503    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:14:32.897515    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:14:35.417593    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:40.419937    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:40.420231    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:40.448535    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:14:40.448639    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:40.470487    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:14:40.470567    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:40.483472    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:14:40.483545    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:40.495705    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:14:40.495774    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:40.505681    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:14:40.505740    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:40.515943    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:14:40.516000    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:40.526082    9387 logs.go:276] 0 containers: []
	W0327 11:14:40.526093    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:40.526143    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:40.536748    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:14:40.536768    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:40.536775    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:40.541143    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:14:40.541148    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:14:40.555421    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:14:40.555432    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:14:40.567485    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:40.567495    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:40.590328    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:14:40.590336    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:40.603660    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:40.603670    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:40.638913    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:14:40.638925    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:14:40.676989    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:14:40.676999    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:14:40.688451    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:14:40.688465    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:14:40.699997    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:14:40.700006    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:14:40.713767    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:14:40.713778    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:14:40.725238    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:14:40.725248    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:14:40.742027    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:14:40.742038    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:14:40.756180    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:14:40.756190    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:14:40.767415    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:40.767427    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:40.805683    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:14:40.805692    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:14:40.820175    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:14:40.820189    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:14:43.337297    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:48.338565    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:48.339231    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:48.374946    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:14:48.375084    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:48.396576    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:14:48.396682    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:48.412066    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:14:48.412148    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:48.424176    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:14:48.424247    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:48.434845    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:14:48.434907    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:48.445502    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:14:48.445572    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:48.455702    9387 logs.go:276] 0 containers: []
	W0327 11:14:48.455715    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:48.455772    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:48.466352    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:14:48.466373    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:14:48.466379    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:14:48.504887    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:14:48.504897    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:14:48.516559    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:14:48.516570    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
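The "container status" one-liner is a fallback chain: `which crictl || echo crictl` substitutes the crictl path when it is installed (otherwise the bare name, which makes the sudo call fail fast), and the outer || then falls back to querying the Docker engine directly. An equivalent, unrolled sketch:

	if CRICTL=$(which crictl); then
	  sudo "$CRICTL" ps -a    # CRI-aware listing when crictl is available
	else
	  sudo docker ps -a       # otherwise ask the Docker engine directly
	fi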
	I0327 11:14:48.529751    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:14:48.529762    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:14:48.542191    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:48.542202    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:48.564877    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:48.564885    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:48.601491    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:48.601499    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:48.636755    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:14:48.636768    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:14:48.652576    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:14:48.652587    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:14:48.663508    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:14:48.663518    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:14:48.678127    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:14:48.678138    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:14:48.696080    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:14:48.696093    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:14:48.715743    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:48.715753    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
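The dmesg invocation keeps only warning-severity-and-above kernel messages: -H formats human-readable timestamps, -P disables the pager, -L=never suppresses color escapes so the captured text stays clean, and --level filters severities; tail -n 400 caps the output the same way the docker logs --tail 400 calls do.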
	I0327 11:14:48.719914    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:14:48.719923    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:14:48.733393    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:14:48.733403    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:14:48.751127    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:14:48.751136    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:14:48.763187    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:14:48.763196    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:14:51.280272    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:14:56.282904    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:14:56.283076    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:14:56.301767    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:14:56.301851    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:14:56.315620    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:14:56.315694    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:14:56.326816    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:14:56.326882    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:14:56.339105    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:14:56.339179    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:14:56.349538    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:14:56.349605    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:14:56.360259    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:14:56.360326    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:14:56.370232    9387 logs.go:276] 0 containers: []
	W0327 11:14:56.370248    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:14:56.370304    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:14:56.381278    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:14:56.381298    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:14:56.381303    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:14:56.418227    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:14:56.418238    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:14:56.432382    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:14:56.432391    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:14:56.443328    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:14:56.443343    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:14:56.447628    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:14:56.447635    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:14:56.458855    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:14:56.458866    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:14:56.471202    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:14:56.471215    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:14:56.485977    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:14:56.485990    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:14:56.503626    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:14:56.503638    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:14:56.526018    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:14:56.526025    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:14:56.540132    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:14:56.540143    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:14:56.551356    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:14:56.551367    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:14:56.568756    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:14:56.568767    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:14:56.583941    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:14:56.583951    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:14:56.596679    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:14:56.596691    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:14:56.634419    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:14:56.634430    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:14:56.676223    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:14:56.676233    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:14:59.191114    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:04.193334    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:04.193448    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:04.207016    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:15:04.207100    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:04.220029    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:15:04.220100    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:04.232169    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:15:04.232244    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:04.243567    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:15:04.243647    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:04.255482    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:15:04.255557    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:04.275162    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:15:04.275234    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:04.286847    9387 logs.go:276] 0 containers: []
	W0327 11:15:04.286859    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:04.286923    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:04.298511    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:15:04.298529    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:15:04.298534    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:15:04.312017    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:15:04.312028    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:15:04.323987    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:15:04.323999    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:15:04.346266    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:15:04.346286    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:15:04.361001    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:15:04.361013    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:15:04.373041    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:15:04.373056    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:04.385190    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:04.385201    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:04.389830    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:15:04.389837    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:15:04.410716    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:15:04.410726    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:15:04.425158    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:15:04.425169    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:15:04.436122    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:15:04.436133    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:15:04.451251    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:15:04.451261    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:15:04.468765    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:15:04.468776    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:15:04.483270    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:04.483280    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:04.520874    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:04.520887    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:04.556340    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:15:04.556351    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:15:04.594014    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:04.594027    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:07.119097    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:12.121322    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:12.121525    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:12.138576    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:15:12.138662    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:12.150525    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:15:12.150593    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:12.165523    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:15:12.165585    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:12.176509    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:15:12.176574    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:12.186556    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:15:12.186621    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:12.196674    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:15:12.196738    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:12.207073    9387 logs.go:276] 0 containers: []
	W0327 11:15:12.207084    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:12.207137    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:12.217344    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:15:12.217360    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:15:12.217365    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:15:12.229093    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:15:12.229105    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:15:12.240807    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:12.240819    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:12.264604    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:12.264614    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:12.300347    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:15:12.300361    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:15:12.314321    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:15:12.314332    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:15:12.328624    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:15:12.328633    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:15:12.342496    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:15:12.342511    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:15:12.357729    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:12.357738    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:12.362208    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:15:12.362216    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:15:12.377756    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:15:12.377769    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:15:12.391473    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:15:12.391483    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:12.403093    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:12.403103    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:12.439845    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:15:12.439857    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:15:12.451310    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:15:12.451320    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:15:12.468415    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:15:12.468426    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:15:12.479102    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:15:12.479113    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:15:15.023513    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:20.026054    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:20.026254    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:20.042972    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:15:20.043060    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:20.056269    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:15:20.056352    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:20.067286    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:15:20.067355    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:20.078744    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:15:20.078817    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:20.092123    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:15:20.092191    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:20.102767    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:15:20.102826    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:20.112813    9387 logs.go:276] 0 containers: []
	W0327 11:15:20.112825    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:20.112881    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:20.127988    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:15:20.128008    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:20.128015    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:20.163894    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:15:20.163904    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:20.178131    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:15:20.178141    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:15:20.195494    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:15:20.195505    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:15:20.213541    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:15:20.213552    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:15:20.231865    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:15:20.231875    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:15:20.261016    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:15:20.261038    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:15:20.278565    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:15:20.278577    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:15:20.294622    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:15:20.294632    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:15:20.306127    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:15:20.306140    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:15:20.318439    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:15:20.318450    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:15:20.329500    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:15:20.329512    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:15:20.343157    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:20.343168    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:20.366400    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:20.366408    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:20.403357    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:15:20.403363    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:15:20.414944    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:20.414954    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:20.419093    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:15:20.419103    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:15:22.958846    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:27.960328    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:27.960698    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:27.992291    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:15:27.992415    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:28.011407    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:15:28.011498    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:28.025661    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:15:28.025738    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:28.037395    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:15:28.037464    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:28.048456    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:15:28.048523    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:28.059112    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:15:28.059183    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:28.069228    9387 logs.go:276] 0 containers: []
	W0327 11:15:28.069240    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:28.069300    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:28.080236    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:15:28.080255    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:15:28.080260    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:15:28.094755    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:15:28.094771    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:15:28.109759    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:15:28.109770    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:15:28.121181    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:15:28.121193    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:15:28.133842    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:15:28.133852    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:15:28.148904    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:15:28.148915    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:15:28.160265    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:28.160276    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:28.197176    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:28.197185    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:28.231999    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:28.232009    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:28.254998    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:15:28.255004    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:28.266915    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:15:28.266925    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:15:28.283969    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:15:28.283979    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:15:28.322625    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:15:28.322636    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:15:28.337861    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:15:28.337872    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:15:28.351158    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:15:28.351173    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:15:28.365986    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:15:28.365998    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:15:28.377178    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:28.377188    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:30.883679    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:35.884458    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:35.884699    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:35.905245    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:15:35.905338    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:35.920199    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:15:35.920279    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:35.932679    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:15:35.932742    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:35.943160    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:15:35.943234    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:35.953354    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:15:35.953426    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:35.964023    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:15:35.964090    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:35.978100    9387 logs.go:276] 0 containers: []
	W0327 11:15:35.978111    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:35.978164    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:35.988783    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:15:35.988801    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:35.988809    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:35.993379    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:15:35.993385    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:15:36.008804    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:15:36.008813    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:15:36.023116    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:15:36.023127    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:15:36.036329    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:15:36.036341    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:36.054819    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:15:36.054829    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:15:36.091427    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:15:36.091441    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:15:36.105617    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:15:36.105631    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:15:36.117882    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:15:36.117895    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:15:36.135784    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:36.135798    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:36.157587    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:36.157594    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:36.194179    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:15:36.194193    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:15:36.208247    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:15:36.208262    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:15:36.222755    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:15:36.222770    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:15:36.240269    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:15:36.240279    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:15:36.251437    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:36.251446    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:36.288140    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:15:36.288150    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:15:38.802092    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:43.803192    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:43.803485    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:43.821152    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:15:43.821240    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:43.834375    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:15:43.834446    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:43.845073    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:15:43.845138    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:43.855532    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:15:43.855596    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:43.867405    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:15:43.867468    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:43.877649    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:15:43.877715    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:43.887962    9387 logs.go:276] 0 containers: []
	W0327 11:15:43.887975    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:43.888037    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:43.898373    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:15:43.898393    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:15:43.898398    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:15:43.912810    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:15:43.912824    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:15:43.925936    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:15:43.925951    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:15:43.937562    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:43.937575    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:43.941693    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:43.941699    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:43.975029    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:15:43.975044    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:15:43.992214    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:15:43.992228    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:15:44.003996    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:44.004006    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:44.040975    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:15:44.040984    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:15:44.052346    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:44.052357    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:44.074892    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:15:44.074899    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:44.087286    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:15:44.087297    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:15:44.128623    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:15:44.128633    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:15:44.140845    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:15:44.140859    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:15:44.156013    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:15:44.156028    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:15:44.173858    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:15:44.173868    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:15:44.190299    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:15:44.190309    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:15:46.712360    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:51.714704    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:51.714883    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:15:51.729752    9387 logs.go:276] 2 containers: [d6afc820712f cef10ae8ee0a]
	I0327 11:15:51.729827    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:15:51.741908    9387 logs.go:276] 2 containers: [3b9863a0b696 564c237001b0]
	I0327 11:15:51.741972    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:15:51.752297    9387 logs.go:276] 1 containers: [f5130263cf8e]
	I0327 11:15:51.752368    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:15:51.762788    9387 logs.go:276] 2 containers: [0e188110832d 7d1480bb2f78]
	I0327 11:15:51.762853    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:15:51.773104    9387 logs.go:276] 1 containers: [0b5e7b7db5c4]
	I0327 11:15:51.773175    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:15:51.786386    9387 logs.go:276] 2 containers: [c41bae31de84 0dc5ad39c5a3]
	I0327 11:15:51.786462    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:15:51.797481    9387 logs.go:276] 0 containers: []
	W0327 11:15:51.797492    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:15:51.797549    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:15:51.808776    9387 logs.go:276] 2 containers: [f6e9e9a3150c 212606ee2dcb]
	I0327 11:15:51.808794    9387 logs.go:123] Gathering logs for kube-apiserver [d6afc820712f] ...
	I0327 11:15:51.808801    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6afc820712f"
	I0327 11:15:51.823609    9387 logs.go:123] Gathering logs for kube-scheduler [7d1480bb2f78] ...
	I0327 11:15:51.823620    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1480bb2f78"
	I0327 11:15:51.838846    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:15:51.838860    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:15:51.852791    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:15:51.852802    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:15:51.890341    9387 logs.go:123] Gathering logs for etcd [3b9863a0b696] ...
	I0327 11:15:51.890349    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b9863a0b696"
	I0327 11:15:51.907819    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:15:51.907830    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:15:51.930578    9387 logs.go:123] Gathering logs for storage-provisioner [212606ee2dcb] ...
	I0327 11:15:51.930593    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 212606ee2dcb"
	I0327 11:15:51.949148    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:15:51.949160    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:15:51.953712    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:15:51.953718    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:15:51.987927    9387 logs.go:123] Gathering logs for kube-apiserver [cef10ae8ee0a] ...
	I0327 11:15:51.987941    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef10ae8ee0a"
	I0327 11:15:52.026272    9387 logs.go:123] Gathering logs for kube-proxy [0b5e7b7db5c4] ...
	I0327 11:15:52.026284    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5e7b7db5c4"
	I0327 11:15:52.044646    9387 logs.go:123] Gathering logs for kube-controller-manager [c41bae31de84] ...
	I0327 11:15:52.044661    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41bae31de84"
	I0327 11:15:52.062396    9387 logs.go:123] Gathering logs for kube-controller-manager [0dc5ad39c5a3] ...
	I0327 11:15:52.062407    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc5ad39c5a3"
	I0327 11:15:52.081752    9387 logs.go:123] Gathering logs for etcd [564c237001b0] ...
	I0327 11:15:52.081765    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 564c237001b0"
	I0327 11:15:52.096610    9387 logs.go:123] Gathering logs for coredns [f5130263cf8e] ...
	I0327 11:15:52.096622    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5130263cf8e"
	I0327 11:15:52.108263    9387 logs.go:123] Gathering logs for kube-scheduler [0e188110832d] ...
	I0327 11:15:52.108273    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e188110832d"
	I0327 11:15:52.120433    9387 logs.go:123] Gathering logs for storage-provisioner [f6e9e9a3150c] ...
	I0327 11:15:52.120443    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6e9e9a3150c"
	I0327 11:15:54.632362    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:15:59.634577    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:15:59.634662    9387 kubeadm.go:591] duration metric: took 4m4.099147625s to restartPrimaryControlPlane
	W0327 11:15:59.634716    9387 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0327 11:15:59.634736    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0327 11:16:00.657016    9387 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.022273458s)
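The reset runs the kubeadm binary minikube ships for the target Kubernetes version, placed first on PATH so the guest's own kubeadm (if any) is not used; --cri-socket points it at cri-dockerd and --force skips the confirmation prompt. The command from the log, reformatted for readability:

	sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	  kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force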
	I0327 11:16:00.657094    9387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 11:16:00.662069    9387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 11:16:00.664758    9387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 11:16:00.667625    9387 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 11:16:00.667631    9387 kubeadm.go:156] found existing configuration files:
	
	I0327 11:16:00.667664    9387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/admin.conf
	I0327 11:16:00.670227    9387 kubeadm.go:162] "https://control-plane.minikube.internal:51357" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 11:16:00.670248    9387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 11:16:00.672798    9387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/kubelet.conf
	I0327 11:16:00.675885    9387 kubeadm.go:162] "https://control-plane.minikube.internal:51357" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 11:16:00.675911    9387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 11:16:00.679231    9387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/controller-manager.conf
	I0327 11:16:00.681776    9387 kubeadm.go:162] "https://control-plane.minikube.internal:51357" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 11:16:00.681800    9387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 11:16:00.684323    9387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/scheduler.conf
	I0327 11:16:00.687451    9387 kubeadm.go:162] "https://control-plane.minikube.internal:51357" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51357 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 11:16:00.687475    9387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
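The four grep/rm pairs above are one check unrolled per kubeconfig file: keep the file only if it references the expected control-plane endpoint, otherwise delete it (here every file is already absent, so each grep exits with status 2 and the rm is a no-op). An equivalent sketch:

	ENDPOINT=https://control-plane.minikube.internal:51357
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # Drop any kubeconfig that does not point at the expected endpoint.
	  sudo grep -q "$ENDPOINT" /etc/kubernetes/$f || sudo rm -f /etc/kubernetes/$f
	done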
	I0327 11:16:00.690836    9387 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0327 11:16:00.708783    9387 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0327 11:16:00.708817    9387 kubeadm.go:309] [preflight] Running pre-flight checks
	I0327 11:16:00.756972    9387 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0327 11:16:00.757026    9387 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0327 11:16:00.757070    9387 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0327 11:16:00.807434    9387 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0327 11:16:00.812630    9387 out.go:204]   - Generating certificates and keys ...
	I0327 11:16:00.812666    9387 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0327 11:16:00.812707    9387 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0327 11:16:00.812754    9387 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0327 11:16:00.812787    9387 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0327 11:16:00.812826    9387 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0327 11:16:00.812852    9387 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0327 11:16:00.812888    9387 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0327 11:16:00.812923    9387 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0327 11:16:00.812963    9387 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0327 11:16:00.813010    9387 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0327 11:16:00.813031    9387 kubeadm.go:309] [certs] Using the existing "sa" key
	I0327 11:16:00.813065    9387 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0327 11:16:00.972370    9387 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0327 11:16:01.194276    9387 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0327 11:16:01.240712    9387 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0327 11:16:01.327050    9387 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0327 11:16:01.354652    9387 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0327 11:16:01.355059    9387 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0327 11:16:01.355097    9387 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0327 11:16:01.443156    9387 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0327 11:16:01.450221    9387 out.go:204]   - Booting up control plane ...
	I0327 11:16:01.450276    9387 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0327 11:16:01.450313    9387 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0327 11:16:01.450349    9387 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0327 11:16:01.450393    9387 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0327 11:16:01.450473    9387 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0327 11:16:05.953095    9387 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.503668 seconds
	I0327 11:16:05.953366    9387 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0327 11:16:05.958885    9387 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0327 11:16:06.468319    9387 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0327 11:16:06.468427    9387 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-042000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0327 11:16:06.972666    9387 kubeadm.go:309] [bootstrap-token] Using token: r3uoc4.gdgrt46bsfuxq20p
	I0327 11:16:06.975287    9387 out.go:204]   - Configuring RBAC rules ...
	I0327 11:16:06.975340    9387 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0327 11:16:06.975380    9387 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0327 11:16:06.982635    9387 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0327 11:16:06.983484    9387 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0327 11:16:06.984358    9387 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0327 11:16:06.985210    9387 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0327 11:16:06.988348    9387 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0327 11:16:07.136529    9387 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0327 11:16:07.376507    9387 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0327 11:16:07.377007    9387 kubeadm.go:309] 
	I0327 11:16:07.377039    9387 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0327 11:16:07.377041    9387 kubeadm.go:309] 
	I0327 11:16:07.377076    9387 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0327 11:16:07.377078    9387 kubeadm.go:309] 
	I0327 11:16:07.377137    9387 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0327 11:16:07.377177    9387 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0327 11:16:07.377229    9387 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0327 11:16:07.377232    9387 kubeadm.go:309] 
	I0327 11:16:07.377257    9387 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0327 11:16:07.377260    9387 kubeadm.go:309] 
	I0327 11:16:07.377286    9387 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0327 11:16:07.377289    9387 kubeadm.go:309] 
	I0327 11:16:07.377315    9387 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0327 11:16:07.377361    9387 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0327 11:16:07.377398    9387 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0327 11:16:07.377403    9387 kubeadm.go:309] 
	I0327 11:16:07.377451    9387 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0327 11:16:07.377502    9387 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0327 11:16:07.377508    9387 kubeadm.go:309] 
	I0327 11:16:07.377549    9387 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token r3uoc4.gdgrt46bsfuxq20p \
	I0327 11:16:07.377605    9387 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:45ef07f6bc58c462e2dd74e42d978dd37f899f74c2cb02d53ec43ace0e53a157 \
	I0327 11:16:07.377620    9387 kubeadm.go:309] 	--control-plane 
	I0327 11:16:07.377624    9387 kubeadm.go:309] 
	I0327 11:16:07.377669    9387 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0327 11:16:07.377673    9387 kubeadm.go:309] 
	I0327 11:16:07.377717    9387 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token r3uoc4.gdgrt46bsfuxq20p \
	I0327 11:16:07.377782    9387 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:45ef07f6bc58c462e2dd74e42d978dd37f899f74c2cb02d53ec43ace0e53a157 
	I0327 11:16:07.378007    9387 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0327 11:16:07.378015    9387 cni.go:84] Creating CNI manager for ""
	I0327 11:16:07.378024    9387 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:16:07.384458    9387 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0327 11:16:07.388605    9387 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0327 11:16:07.391525    9387 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0327 11:16:07.397357    9387 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0327 11:16:07.397447    9387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 11:16:07.397483    9387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-042000 minikube.k8s.io/updated_at=2024_03_27T11_16_07_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=475b39f6a1dc94a0c7060d2eec10d9b995edcd28 minikube.k8s.io/name=stopped-upgrade-042000 minikube.k8s.io/primary=true
	I0327 11:16:07.400569    9387 ops.go:34] apiserver oom_adj: -16
	I0327 11:16:07.446947    9387 kubeadm.go:1107] duration metric: took 49.541542ms to wait for elevateKubeSystemPrivileges
	W0327 11:16:07.446973    9387 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0327 11:16:07.446982    9387 kubeadm.go:393] duration metric: took 4m11.925216041s to StartCluster
	I0327 11:16:07.446992    9387 settings.go:142] acquiring lock: {Name:mkcc43d09d3f5881bfa353312da863c5f99807b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:16:07.447076    9387 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:16:07.447501    9387 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/kubeconfig: {Name:mkbb2a293ed6f55faa985236f49efd479958575e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:16:07.447712    9387 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:16:07.451485    9387 out.go:177] * Verifying Kubernetes components...
	I0327 11:16:07.447720    9387 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0327 11:16:07.447792    9387 config.go:182] Loaded profile config "stopped-upgrade-042000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 11:16:07.458373    9387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 11:16:07.458387    9387 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-042000"
	I0327 11:16:07.458401    9387 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-042000"
	W0327 11:16:07.458404    9387 addons.go:243] addon storage-provisioner should already be in state true
	I0327 11:16:07.458418    9387 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-042000"
	I0327 11:16:07.458421    9387 host.go:66] Checking if "stopped-upgrade-042000" exists ...
	I0327 11:16:07.458429    9387 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-042000"
	I0327 11:16:07.459926    9387 kapi.go:59] client config for stopped-upgrade-042000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/stopped-upgrade-042000/client.key", CAFile:"/Users/jenkins/minikube-integration/18517-6013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1060d6b90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 11:16:07.460042    9387 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-042000"
	W0327 11:16:07.460047    9387 addons.go:243] addon default-storageclass should already be in state true
	I0327 11:16:07.460055    9387 host.go:66] Checking if "stopped-upgrade-042000" exists ...
	I0327 11:16:07.464328    9387 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 11:16:07.467490    9387 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 11:16:07.467498    9387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 11:16:07.467505    9387 sshutil.go:53] new ssh client: &{IP:localhost Port:51322 SSHKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/stopped-upgrade-042000/id_rsa Username:docker}
	I0327 11:16:07.468140    9387 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 11:16:07.468145    9387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 11:16:07.468148    9387 sshutil.go:53] new ssh client: &{IP:localhost Port:51322 SSHKeyPath:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/stopped-upgrade-042000/id_rsa Username:docker}
	I0327 11:16:07.548967    9387 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 11:16:07.554167    9387 api_server.go:52] waiting for apiserver process to appear ...
	I0327 11:16:07.554210    9387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 11:16:07.558097    9387 api_server.go:72] duration metric: took 110.372334ms to wait for apiserver process to appear ...
	I0327 11:16:07.558105    9387 api_server.go:88] waiting for apiserver healthz status ...
	I0327 11:16:07.558112    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:07.616938    9387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0327 11:16:07.617811    9387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 11:16:12.560228    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:12.560271    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:17.560612    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:17.560657    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:22.560990    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:22.561030    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:27.561475    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:27.561505    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:32.562117    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:32.562161    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:37.562941    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:37.562984    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0327 11:16:37.979486    9387 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0327 11:16:37.983896    9387 out.go:177] * Enabled addons: storage-provisioner
	I0327 11:16:37.994889    9387 addons.go:505] duration metric: took 30.547374833s for enable addons: enabled=[storage-provisioner]
	I0327 11:16:42.563946    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:42.563972    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:47.565176    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:47.565196    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:52.566496    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:52.566525    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:16:57.568390    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:16:57.568440    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:17:02.570636    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:17:02.570680    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:17:07.572954    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:17:07.573121    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:17:07.583712    9387 logs.go:276] 1 containers: [e0d465757988]
	I0327 11:17:07.583782    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:17:07.593947    9387 logs.go:276] 1 containers: [e890ec827509]
	I0327 11:17:07.594025    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:17:07.604266    9387 logs.go:276] 2 containers: [8ec9b65e5b7b 9753130df36a]
	I0327 11:17:07.604335    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:17:07.614671    9387 logs.go:276] 1 containers: [3335bde7aa53]
	I0327 11:17:07.614735    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:17:07.624900    9387 logs.go:276] 1 containers: [4e89fcf32383]
	I0327 11:17:07.624966    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:17:07.635036    9387 logs.go:276] 1 containers: [567f2751a6e5]
	I0327 11:17:07.635107    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:17:07.650541    9387 logs.go:276] 0 containers: []
	W0327 11:17:07.650552    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:17:07.650611    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:17:07.660888    9387 logs.go:276] 1 containers: [9afd077140af]
	I0327 11:17:07.660902    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:17:07.660907    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:17:07.665779    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:17:07.665786    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:17:07.699330    9387 logs.go:123] Gathering logs for etcd [e890ec827509] ...
	I0327 11:17:07.699341    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e890ec827509"
	I0327 11:17:07.714735    9387 logs.go:123] Gathering logs for coredns [8ec9b65e5b7b] ...
	I0327 11:17:07.714752    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ec9b65e5b7b"
	I0327 11:17:07.726352    9387 logs.go:123] Gathering logs for coredns [9753130df36a] ...
	I0327 11:17:07.726365    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9753130df36a"
	I0327 11:17:07.737562    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:17:07.737572    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:17:07.762332    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:17:07.762340    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:17:07.774078    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:17:07.774100    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:17:07.808472    9387 logs.go:123] Gathering logs for kube-apiserver [e0d465757988] ...
	I0327 11:17:07.808482    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d465757988"
	I0327 11:17:07.823761    9387 logs.go:123] Gathering logs for kube-scheduler [3335bde7aa53] ...
	I0327 11:17:07.823773    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3335bde7aa53"
	I0327 11:17:07.837754    9387 logs.go:123] Gathering logs for kube-proxy [4e89fcf32383] ...
	I0327 11:17:07.837764    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e89fcf32383"
	I0327 11:17:07.849364    9387 logs.go:123] Gathering logs for kube-controller-manager [567f2751a6e5] ...
	I0327 11:17:07.849377    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 567f2751a6e5"
	I0327 11:17:07.866928    9387 logs.go:123] Gathering logs for storage-provisioner [9afd077140af] ...
	I0327 11:17:07.866938    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afd077140af"
	I0327 11:17:10.380274    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:17:15.382610    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:17:15.382970    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:17:15.409476    9387 logs.go:276] 1 containers: [e0d465757988]
	I0327 11:17:15.409599    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:17:15.427044    9387 logs.go:276] 1 containers: [e890ec827509]
	I0327 11:17:15.427140    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:17:15.440552    9387 logs.go:276] 2 containers: [8ec9b65e5b7b 9753130df36a]
	I0327 11:17:15.440624    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:17:15.452309    9387 logs.go:276] 1 containers: [3335bde7aa53]
	I0327 11:17:15.452372    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:17:15.462587    9387 logs.go:276] 1 containers: [4e89fcf32383]
	I0327 11:17:15.462653    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:17:15.473237    9387 logs.go:276] 1 containers: [567f2751a6e5]
	I0327 11:17:15.473300    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:17:15.483399    9387 logs.go:276] 0 containers: []
	W0327 11:17:15.483412    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:17:15.483465    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:17:15.493971    9387 logs.go:276] 1 containers: [9afd077140af]
	I0327 11:17:15.493987    9387 logs.go:123] Gathering logs for storage-provisioner [9afd077140af] ...
	I0327 11:17:15.493992    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afd077140af"
	I0327 11:17:15.506734    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:17:15.506745    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:17:15.517996    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:17:15.518008    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:17:15.551608    9387 logs.go:123] Gathering logs for coredns [8ec9b65e5b7b] ...
	I0327 11:17:15.551620    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ec9b65e5b7b"
	I0327 11:17:15.563107    9387 logs.go:123] Gathering logs for coredns [9753130df36a] ...
	I0327 11:17:15.563122    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9753130df36a"
	I0327 11:17:15.575499    9387 logs.go:123] Gathering logs for kube-scheduler [3335bde7aa53] ...
	I0327 11:17:15.575510    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3335bde7aa53"
	I0327 11:17:15.591481    9387 logs.go:123] Gathering logs for kube-proxy [4e89fcf32383] ...
	I0327 11:17:15.591494    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e89fcf32383"
	I0327 11:17:15.603419    9387 logs.go:123] Gathering logs for kube-controller-manager [567f2751a6e5] ...
	I0327 11:17:15.603434    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 567f2751a6e5"
	I0327 11:17:15.620220    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:17:15.620235    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:17:15.625033    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:17:15.625039    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:17:15.659744    9387 logs.go:123] Gathering logs for kube-apiserver [e0d465757988] ...
	I0327 11:17:15.659758    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d465757988"
	I0327 11:17:15.674084    9387 logs.go:123] Gathering logs for etcd [e890ec827509] ...
	I0327 11:17:15.674096    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e890ec827509"
	I0327 11:17:15.689296    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:17:15.689309    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:17:18.214857    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:17:23.217112    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:17:23.217457    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:17:23.251579    9387 logs.go:276] 1 containers: [e0d465757988]
	I0327 11:17:23.251719    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:17:23.269132    9387 logs.go:276] 1 containers: [e890ec827509]
	I0327 11:17:23.269224    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:17:23.283366    9387 logs.go:276] 2 containers: [8ec9b65e5b7b 9753130df36a]
	I0327 11:17:23.283442    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:17:23.295650    9387 logs.go:276] 1 containers: [3335bde7aa53]
	I0327 11:17:23.295723    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:17:23.306092    9387 logs.go:276] 1 containers: [4e89fcf32383]
	I0327 11:17:23.306159    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:17:23.323085    9387 logs.go:276] 1 containers: [567f2751a6e5]
	I0327 11:17:23.323152    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:17:23.333324    9387 logs.go:276] 0 containers: []
	W0327 11:17:23.333341    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:17:23.333394    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:17:23.343434    9387 logs.go:276] 1 containers: [9afd077140af]
	I0327 11:17:23.343449    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:17:23.343455    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:17:23.377617    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:17:23.377626    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:17:23.381976    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:17:23.381983    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:17:23.415593    9387 logs.go:123] Gathering logs for etcd [e890ec827509] ...
	I0327 11:17:23.415606    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e890ec827509"
	I0327 11:17:23.429229    9387 logs.go:123] Gathering logs for coredns [9753130df36a] ...
	I0327 11:17:23.429239    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9753130df36a"
	I0327 11:17:23.440980    9387 logs.go:123] Gathering logs for kube-scheduler [3335bde7aa53] ...
	I0327 11:17:23.440990    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3335bde7aa53"
	I0327 11:17:23.456068    9387 logs.go:123] Gathering logs for kube-proxy [4e89fcf32383] ...
	I0327 11:17:23.456079    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e89fcf32383"
	I0327 11:17:23.468066    9387 logs.go:123] Gathering logs for kube-controller-manager [567f2751a6e5] ...
	I0327 11:17:23.468080    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 567f2751a6e5"
	I0327 11:17:23.485023    9387 logs.go:123] Gathering logs for storage-provisioner [9afd077140af] ...
	I0327 11:17:23.485034    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afd077140af"
	I0327 11:17:23.496974    9387 logs.go:123] Gathering logs for kube-apiserver [e0d465757988] ...
	I0327 11:17:23.496986    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d465757988"
	I0327 11:17:23.511395    9387 logs.go:123] Gathering logs for coredns [8ec9b65e5b7b] ...
	I0327 11:17:23.511409    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ec9b65e5b7b"
	I0327 11:17:23.523155    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:17:23.523170    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:17:23.546747    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:17:23.546755    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:17:26.058512    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:17:31.059425    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:17:31.059859    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:17:31.090571    9387 logs.go:276] 1 containers: [e0d465757988]
	I0327 11:17:31.090699    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:17:31.109776    9387 logs.go:276] 1 containers: [e890ec827509]
	I0327 11:17:31.109867    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:17:31.123301    9387 logs.go:276] 2 containers: [8ec9b65e5b7b 9753130df36a]
	I0327 11:17:31.123377    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:17:31.135117    9387 logs.go:276] 1 containers: [3335bde7aa53]
	I0327 11:17:31.135193    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:17:31.145427    9387 logs.go:276] 1 containers: [4e89fcf32383]
	I0327 11:17:31.145497    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:17:31.155971    9387 logs.go:276] 1 containers: [567f2751a6e5]
	I0327 11:17:31.156040    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:17:31.165889    9387 logs.go:276] 0 containers: []
	W0327 11:17:31.165900    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:17:31.165952    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:17:31.176106    9387 logs.go:276] 1 containers: [9afd077140af]
	I0327 11:17:31.176124    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:17:31.176128    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:17:31.213131    9387 logs.go:123] Gathering logs for kube-apiserver [e0d465757988] ...
	I0327 11:17:31.213140    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d465757988"
	I0327 11:17:31.233802    9387 logs.go:123] Gathering logs for kube-scheduler [3335bde7aa53] ...
	I0327 11:17:31.233815    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3335bde7aa53"
	I0327 11:17:31.248632    9387 logs.go:123] Gathering logs for kube-proxy [4e89fcf32383] ...
	I0327 11:17:31.248644    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e89fcf32383"
	I0327 11:17:31.261037    9387 logs.go:123] Gathering logs for storage-provisioner [9afd077140af] ...
	I0327 11:17:31.261049    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afd077140af"
	I0327 11:17:31.272028    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:17:31.272040    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:17:31.295079    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:17:31.295087    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:17:31.328626    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:17:31.328637    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:17:31.333529    9387 logs.go:123] Gathering logs for etcd [e890ec827509] ...
	I0327 11:17:31.333539    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e890ec827509"
	I0327 11:17:31.347438    9387 logs.go:123] Gathering logs for coredns [8ec9b65e5b7b] ...
	I0327 11:17:31.347451    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ec9b65e5b7b"
	I0327 11:17:31.362780    9387 logs.go:123] Gathering logs for coredns [9753130df36a] ...
	I0327 11:17:31.362791    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9753130df36a"
	I0327 11:17:31.373829    9387 logs.go:123] Gathering logs for kube-controller-manager [567f2751a6e5] ...
	I0327 11:17:31.373839    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 567f2751a6e5"
	I0327 11:17:31.391012    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:17:31.391025    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:17:33.905335    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:17:38.907842    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:17:38.908255    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:17:38.950228    9387 logs.go:276] 1 containers: [e0d465757988]
	I0327 11:17:38.950353    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:17:38.972332    9387 logs.go:276] 1 containers: [e890ec827509]
	I0327 11:17:38.972427    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:17:38.995230    9387 logs.go:276] 2 containers: [8ec9b65e5b7b 9753130df36a]
	I0327 11:17:38.995290    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:17:39.006590    9387 logs.go:276] 1 containers: [3335bde7aa53]
	I0327 11:17:39.006659    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:17:39.017044    9387 logs.go:276] 1 containers: [4e89fcf32383]
	I0327 11:17:39.017108    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:17:39.032268    9387 logs.go:276] 1 containers: [567f2751a6e5]
	I0327 11:17:39.032334    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:17:39.043571    9387 logs.go:276] 0 containers: []
	W0327 11:17:39.043585    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:17:39.043648    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:17:39.053826    9387 logs.go:276] 1 containers: [9afd077140af]
	I0327 11:17:39.053844    9387 logs.go:123] Gathering logs for etcd [e890ec827509] ...
	I0327 11:17:39.053849    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e890ec827509"
	I0327 11:17:39.067497    9387 logs.go:123] Gathering logs for coredns [8ec9b65e5b7b] ...
	I0327 11:17:39.067507    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ec9b65e5b7b"
	I0327 11:17:39.082932    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:17:39.082946    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:17:39.096701    9387 logs.go:123] Gathering logs for kube-apiserver [e0d465757988] ...
	I0327 11:17:39.096712    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d465757988"
	I0327 11:17:39.110900    9387 logs.go:123] Gathering logs for coredns [9753130df36a] ...
	I0327 11:17:39.110912    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9753130df36a"
	I0327 11:17:39.122474    9387 logs.go:123] Gathering logs for kube-scheduler [3335bde7aa53] ...
	I0327 11:17:39.122486    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3335bde7aa53"
	I0327 11:17:39.136360    9387 logs.go:123] Gathering logs for kube-proxy [4e89fcf32383] ...
	I0327 11:17:39.136369    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e89fcf32383"
	I0327 11:17:39.149778    9387 logs.go:123] Gathering logs for kube-controller-manager [567f2751a6e5] ...
	I0327 11:17:39.149791    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 567f2751a6e5"
	I0327 11:17:39.167628    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:17:39.167638    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:17:39.201040    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:17:39.201049    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:17:39.204971    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:17:39.204978    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:17:39.244091    9387 logs.go:123] Gathering logs for storage-provisioner [9afd077140af] ...
	I0327 11:17:39.244102    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afd077140af"
	I0327 11:17:39.255569    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:17:39.255583    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:17:41.781953    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:17:46.784854    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:17:46.785239    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:17:46.825033    9387 logs.go:276] 1 containers: [e0d465757988]
	I0327 11:17:46.825159    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:17:46.846392    9387 logs.go:276] 1 containers: [e890ec827509]
	I0327 11:17:46.846507    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:17:46.861538    9387 logs.go:276] 2 containers: [8ec9b65e5b7b 9753130df36a]
	I0327 11:17:46.861611    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:17:46.876099    9387 logs.go:276] 1 containers: [3335bde7aa53]
	I0327 11:17:46.876162    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:17:46.891062    9387 logs.go:276] 1 containers: [4e89fcf32383]
	I0327 11:17:46.891122    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:17:46.901526    9387 logs.go:276] 1 containers: [567f2751a6e5]
	I0327 11:17:46.901593    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:17:46.911690    9387 logs.go:276] 0 containers: []
	W0327 11:17:46.911702    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:17:46.911761    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:17:46.921819    9387 logs.go:276] 1 containers: [9afd077140af]
	I0327 11:17:46.921835    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:17:46.921840    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:17:46.926088    9387 logs.go:123] Gathering logs for etcd [e890ec827509] ...
	I0327 11:17:46.926096    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e890ec827509"
	I0327 11:17:46.940159    9387 logs.go:123] Gathering logs for coredns [8ec9b65e5b7b] ...
	I0327 11:17:46.940171    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ec9b65e5b7b"
	I0327 11:17:46.951318    9387 logs.go:123] Gathering logs for coredns [9753130df36a] ...
	I0327 11:17:46.951331    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9753130df36a"
	I0327 11:17:46.962408    9387 logs.go:123] Gathering logs for storage-provisioner [9afd077140af] ...
	I0327 11:17:46.962422    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afd077140af"
	I0327 11:17:46.973512    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:17:46.973524    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:17:47.008320    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:17:47.008328    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:17:47.042651    9387 logs.go:123] Gathering logs for kube-apiserver [e0d465757988] ...
	I0327 11:17:47.042662    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d465757988"
	I0327 11:17:47.057621    9387 logs.go:123] Gathering logs for kube-scheduler [3335bde7aa53] ...
	I0327 11:17:47.057632    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3335bde7aa53"
	I0327 11:17:47.078027    9387 logs.go:123] Gathering logs for kube-proxy [4e89fcf32383] ...
	I0327 11:17:47.078039    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e89fcf32383"
	I0327 11:17:47.089805    9387 logs.go:123] Gathering logs for kube-controller-manager [567f2751a6e5] ...
	I0327 11:17:47.089820    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 567f2751a6e5"
	I0327 11:17:47.106958    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:17:47.106968    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:17:47.131415    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:17:47.131422    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:17:49.644375    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:17:54.646765    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:17:54.647174    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:17:54.683834    9387 logs.go:276] 1 containers: [e0d465757988]
	I0327 11:17:54.683952    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:17:54.703246    9387 logs.go:276] 1 containers: [e890ec827509]
	I0327 11:17:54.703339    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:17:54.717627    9387 logs.go:276] 2 containers: [8ec9b65e5b7b 9753130df36a]
	I0327 11:17:54.717690    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:17:54.729623    9387 logs.go:276] 1 containers: [3335bde7aa53]
	I0327 11:17:54.729693    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:17:54.740846    9387 logs.go:276] 1 containers: [4e89fcf32383]
	I0327 11:17:54.740904    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:17:54.752007    9387 logs.go:276] 1 containers: [567f2751a6e5]
	I0327 11:17:54.752078    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:17:54.761907    9387 logs.go:276] 0 containers: []
	W0327 11:17:54.761923    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:17:54.761978    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:17:54.772107    9387 logs.go:276] 1 containers: [9afd077140af]
	I0327 11:17:54.772121    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:17:54.772127    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:17:54.808707    9387 logs.go:123] Gathering logs for kube-apiserver [e0d465757988] ...
	I0327 11:17:54.808722    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d465757988"
	I0327 11:17:54.823632    9387 logs.go:123] Gathering logs for etcd [e890ec827509] ...
	I0327 11:17:54.823645    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e890ec827509"
	I0327 11:17:54.842338    9387 logs.go:123] Gathering logs for coredns [9753130df36a] ...
	I0327 11:17:54.842351    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9753130df36a"
	I0327 11:17:54.870745    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:17:54.870758    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:17:54.894752    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:17:54.894760    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:17:54.907953    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:17:54.907968    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:17:54.941253    9387 logs.go:123] Gathering logs for coredns [8ec9b65e5b7b] ...
	I0327 11:17:54.941263    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ec9b65e5b7b"
	I0327 11:17:54.953080    9387 logs.go:123] Gathering logs for kube-scheduler [3335bde7aa53] ...
	I0327 11:17:54.953093    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3335bde7aa53"
	I0327 11:17:54.967480    9387 logs.go:123] Gathering logs for kube-proxy [4e89fcf32383] ...
	I0327 11:17:54.967490    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e89fcf32383"
	I0327 11:17:54.978973    9387 logs.go:123] Gathering logs for kube-controller-manager [567f2751a6e5] ...
	I0327 11:17:54.978985    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 567f2751a6e5"
	I0327 11:17:54.995707    9387 logs.go:123] Gathering logs for storage-provisioner [9afd077140af] ...
	I0327 11:17:54.995717    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afd077140af"
	I0327 11:17:55.006616    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:17:55.006625    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:17:57.512739    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:18:02.515102    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:18:02.515318    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:18:02.542146    9387 logs.go:276] 1 containers: [e0d465757988]
	I0327 11:18:02.542291    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:18:02.561016    9387 logs.go:276] 1 containers: [e890ec827509]
	I0327 11:18:02.561098    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:18:02.574843    9387 logs.go:276] 2 containers: [8ec9b65e5b7b 9753130df36a]
	I0327 11:18:02.574912    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:18:02.587180    9387 logs.go:276] 1 containers: [3335bde7aa53]
	I0327 11:18:02.587247    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:18:02.597667    9387 logs.go:276] 1 containers: [4e89fcf32383]
	I0327 11:18:02.597736    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:18:02.607790    9387 logs.go:276] 1 containers: [567f2751a6e5]
	I0327 11:18:02.607854    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:18:02.617636    9387 logs.go:276] 0 containers: []
	W0327 11:18:02.617645    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:18:02.617692    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:18:02.627895    9387 logs.go:276] 1 containers: [9afd077140af]
	I0327 11:18:02.627910    9387 logs.go:123] Gathering logs for coredns [9753130df36a] ...
	I0327 11:18:02.627915    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9753130df36a"
	I0327 11:18:02.639074    9387 logs.go:123] Gathering logs for kube-scheduler [3335bde7aa53] ...
	I0327 11:18:02.639084    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3335bde7aa53"
	I0327 11:18:02.652962    9387 logs.go:123] Gathering logs for storage-provisioner [9afd077140af] ...
	I0327 11:18:02.652971    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afd077140af"
	I0327 11:18:02.663975    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:18:02.663987    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:18:02.687241    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:18:02.687251    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:18:02.698097    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:18:02.698110    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:18:02.702182    9387 logs.go:123] Gathering logs for kube-apiserver [e0d465757988] ...
	I0327 11:18:02.702191    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d465757988"
	I0327 11:18:02.717368    9387 logs.go:123] Gathering logs for etcd [e890ec827509] ...
	I0327 11:18:02.717379    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e890ec827509"
	I0327 11:18:02.730699    9387 logs.go:123] Gathering logs for kube-proxy [4e89fcf32383] ...
	I0327 11:18:02.730712    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e89fcf32383"
	I0327 11:18:02.742397    9387 logs.go:123] Gathering logs for kube-controller-manager [567f2751a6e5] ...
	I0327 11:18:02.742409    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 567f2751a6e5"
	I0327 11:18:02.759716    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:18:02.759726    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:18:02.794407    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:18:02.794417    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:18:02.828822    9387 logs.go:123] Gathering logs for coredns [8ec9b65e5b7b] ...
	I0327 11:18:02.828836    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ec9b65e5b7b"
	I0327 11:18:05.342364    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:18:10.343975    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:18:10.344215    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:18:10.371162    9387 logs.go:276] 1 containers: [e0d465757988]
	I0327 11:18:10.371277    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:18:10.388479    9387 logs.go:276] 1 containers: [e890ec827509]
	I0327 11:18:10.388553    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:18:10.402600    9387 logs.go:276] 2 containers: [8ec9b65e5b7b 9753130df36a]
	I0327 11:18:10.402660    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:18:10.417702    9387 logs.go:276] 1 containers: [3335bde7aa53]
	I0327 11:18:10.417773    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:18:10.427948    9387 logs.go:276] 1 containers: [4e89fcf32383]
	I0327 11:18:10.428020    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:18:10.438357    9387 logs.go:276] 1 containers: [567f2751a6e5]
	I0327 11:18:10.438432    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:18:10.448336    9387 logs.go:276] 0 containers: []
	W0327 11:18:10.448349    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:18:10.448409    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:18:10.458474    9387 logs.go:276] 1 containers: [9afd077140af]
	I0327 11:18:10.458490    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:18:10.458497    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:18:10.462764    9387 logs.go:123] Gathering logs for coredns [8ec9b65e5b7b] ...
	I0327 11:18:10.462771    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ec9b65e5b7b"
	I0327 11:18:10.474546    9387 logs.go:123] Gathering logs for coredns [9753130df36a] ...
	I0327 11:18:10.474556    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9753130df36a"
	I0327 11:18:10.486111    9387 logs.go:123] Gathering logs for kube-scheduler [3335bde7aa53] ...
	I0327 11:18:10.486120    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3335bde7aa53"
	I0327 11:18:10.500715    9387 logs.go:123] Gathering logs for kube-proxy [4e89fcf32383] ...
	I0327 11:18:10.500726    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e89fcf32383"
	I0327 11:18:10.512382    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:18:10.512395    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:18:10.537038    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:18:10.537045    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:18:10.571937    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:18:10.571945    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:18:10.605706    9387 logs.go:123] Gathering logs for kube-apiserver [e0d465757988] ...
	I0327 11:18:10.605719    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d465757988"
	I0327 11:18:10.619851    9387 logs.go:123] Gathering logs for etcd [e890ec827509] ...
	I0327 11:18:10.619863    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e890ec827509"
	I0327 11:18:10.633675    9387 logs.go:123] Gathering logs for kube-controller-manager [567f2751a6e5] ...
	I0327 11:18:10.633688    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 567f2751a6e5"
	I0327 11:18:10.650978    9387 logs.go:123] Gathering logs for storage-provisioner [9afd077140af] ...
	I0327 11:18:10.650991    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afd077140af"
	I0327 11:18:10.663067    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:18:10.663076    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:18:13.177798    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:18:18.180009    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
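	The pair of lines above is the motif for the rest of this section: an HTTPS GET against the apiserver's /healthz endpoint that fails after roughly five seconds with "Client.Timeout exceeded", followed by a fresh round of log gathering before the next attempt. A minimal Go sketch of that probe pattern, assuming a 5-second per-request timeout and skipped TLS verification (both inferred from the timestamps and the in-VM endpoint, not confirmed by the log):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            // ~5s gap between "Checking" and "stopped:" suggests a short client timeout
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                // assumption: the probe does not verify the apiserver's certificate
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        for deadline := time.Now().Add(2 * time.Minute); time.Now().Before(deadline); {
	            resp, err := client.Get("https://10.0.2.15:8443/healthz")
	            if err != nil {
	                // this branch corresponds to the "stopped: ... Client.Timeout exceeded" lines
	                fmt.Printf("stopped: %v\n", err)
	                time.Sleep(3 * time.Second)
	                continue
	            }
	            resp.Body.Close()
	            if resp.StatusCode == http.StatusOK {
	                fmt.Println("apiserver healthy")
	                return
	            }
	        }
	        fmt.Println("timed out waiting for a healthy apiserver")
	    }

	Each failed GET here corresponds to one "Checking ... / stopped: ..." pair in the log below.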
	I0327 11:18:18.180231    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:18:18.204689    9387 logs.go:276] 1 containers: [e0d465757988]
	I0327 11:18:18.204805    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:18:18.220967    9387 logs.go:276] 1 containers: [e890ec827509]
	I0327 11:18:18.221039    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:18:18.233487    9387 logs.go:276] 2 containers: [8ec9b65e5b7b 9753130df36a]
	I0327 11:18:18.233551    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:18:18.244635    9387 logs.go:276] 1 containers: [3335bde7aa53]
	I0327 11:18:18.244692    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:18:18.255291    9387 logs.go:276] 1 containers: [4e89fcf32383]
	I0327 11:18:18.255360    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:18:18.265647    9387 logs.go:276] 1 containers: [567f2751a6e5]
	I0327 11:18:18.265714    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:18:18.277381    9387 logs.go:276] 0 containers: []
	W0327 11:18:18.277398    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:18:18.277453    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:18:18.288118    9387 logs.go:276] 1 containers: [9afd077140af]
	I0327 11:18:18.288132    9387 logs.go:123] Gathering logs for kube-scheduler [3335bde7aa53] ...
	I0327 11:18:18.288137    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3335bde7aa53"
	I0327 11:18:18.302052    9387 logs.go:123] Gathering logs for kube-proxy [4e89fcf32383] ...
	I0327 11:18:18.302063    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e89fcf32383"
	I0327 11:18:18.312768    9387 logs.go:123] Gathering logs for kube-controller-manager [567f2751a6e5] ...
	I0327 11:18:18.312781    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 567f2751a6e5"
	I0327 11:18:18.331422    9387 logs.go:123] Gathering logs for storage-provisioner [9afd077140af] ...
	I0327 11:18:18.331433    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afd077140af"
	I0327 11:18:18.342961    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:18:18.342974    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:18:18.354780    9387 logs.go:123] Gathering logs for coredns [8ec9b65e5b7b] ...
	I0327 11:18:18.354793    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ec9b65e5b7b"
	I0327 11:18:18.371569    9387 logs.go:123] Gathering logs for coredns [9753130df36a] ...
	I0327 11:18:18.371580    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9753130df36a"
	I0327 11:18:18.382833    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:18:18.382846    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:18:18.407588    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:18:18.407606    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:18:18.442629    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:18:18.442653    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:18:18.450973    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:18:18.450985    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:18:18.524647    9387 logs.go:123] Gathering logs for kube-apiserver [e0d465757988] ...
	I0327 11:18:18.524661    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d465757988"
	I0327 11:18:18.554971    9387 logs.go:123] Gathering logs for etcd [e890ec827509] ...
	I0327 11:18:18.554986    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e890ec827509"
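	Each retry begins by re-enumerating the control-plane containers: the eight docker ps invocations above are one pass over a fixed component list, matching on the k8s_<component> name prefix given to pod containers. A sketch of that discovery step, assuming direct access to the docker CLI rather than minikube's ssh_runner:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // containerIDs lists all containers (running or exited) whose name matches
	    // the k8s_<component> prefix, mirroring:
	    //   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
	    func containerIDs(component string) ([]string, error) {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name=k8s_"+component,
	            "--format", "{{.ID}}").Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
	            ids, err := containerIDs(c)
	            if err != nil {
	                fmt.Println(c, "error:", err)
	                continue
	            }
	            // corresponds to the "logs.go:276] N containers: [...]" lines
	            fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	        }
	    }

	The printed counts correspond to the "logs.go:276] N containers: [...]" lines; an empty result is what triggers the 'No container was found matching "kindnet"' warning.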
	I0327 11:18:21.073389    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:18:26.076072    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:18:26.076440    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:18:26.117545    9387 logs.go:276] 1 containers: [e0d465757988]
	I0327 11:18:26.117684    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:18:26.140439    9387 logs.go:276] 1 containers: [e890ec827509]
	I0327 11:18:26.140549    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:18:26.156217    9387 logs.go:276] 4 containers: [05d2d6cf8ec3 a4cf93097427 8ec9b65e5b7b 9753130df36a]
	I0327 11:18:26.156297    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:18:26.168576    9387 logs.go:276] 1 containers: [3335bde7aa53]
	I0327 11:18:26.168644    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:18:26.179382    9387 logs.go:276] 1 containers: [4e89fcf32383]
	I0327 11:18:26.179446    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:18:26.191487    9387 logs.go:276] 1 containers: [567f2751a6e5]
	I0327 11:18:26.191552    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:18:26.201742    9387 logs.go:276] 0 containers: []
	W0327 11:18:26.201753    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:18:26.201806    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:18:26.211965    9387 logs.go:276] 1 containers: [9afd077140af]
	I0327 11:18:26.211983    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:18:26.211988    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:18:26.216693    9387 logs.go:123] Gathering logs for coredns [05d2d6cf8ec3] ...
	I0327 11:18:26.216702    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d2d6cf8ec3"
	I0327 11:18:26.228326    9387 logs.go:123] Gathering logs for kube-scheduler [3335bde7aa53] ...
	I0327 11:18:26.228337    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3335bde7aa53"
	I0327 11:18:26.250595    9387 logs.go:123] Gathering logs for kube-controller-manager [567f2751a6e5] ...
	I0327 11:18:26.250609    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 567f2751a6e5"
	I0327 11:18:26.267785    9387 logs.go:123] Gathering logs for storage-provisioner [9afd077140af] ...
	I0327 11:18:26.267796    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afd077140af"
	I0327 11:18:26.279099    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:18:26.279111    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:18:26.311945    9387 logs.go:123] Gathering logs for kube-apiserver [e0d465757988] ...
	I0327 11:18:26.315082    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d465757988"
	I0327 11:18:26.329717    9387 logs.go:123] Gathering logs for coredns [a4cf93097427] ...
	I0327 11:18:26.329728    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4cf93097427"
	I0327 11:18:26.341063    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:18:26.341073    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:18:26.366322    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:18:26.366329    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:18:26.399905    9387 logs.go:123] Gathering logs for etcd [e890ec827509] ...
	I0327 11:18:26.399915    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e890ec827509"
	I0327 11:18:26.417876    9387 logs.go:123] Gathering logs for kube-proxy [4e89fcf32383] ...
	I0327 11:18:26.417886    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e89fcf32383"
	I0327 11:18:26.432701    9387 logs.go:123] Gathering logs for coredns [8ec9b65e5b7b] ...
	I0327 11:18:26.432712    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ec9b65e5b7b"
	I0327 11:18:26.444596    9387 logs.go:123] Gathering logs for coredns [9753130df36a] ...
	I0327 11:18:26.444607    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9753130df36a"
	I0327 11:18:26.456598    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:18:26.456611    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
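	With the IDs in hand, each component's log is pulled with a fixed 400-line tail, exactly as the docker logs commands in this cycle show. A sketch under the same local-CLI assumption; CombinedOutput is used because docker logs replays a container's stderr on the caller's stderr:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // tailLogs mirrors the `docker logs --tail 400 <id>` calls above.
	    func tailLogs(id string) (string, error) {
	        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	        return string(out), err
	    }

	    func main() {
	        // IDs as reported by the discovery pass (kube-apiserver, etcd, kube-scheduler)
	        for _, id := range []string{"e0d465757988", "e890ec827509", "3335bde7aa53"} {
	            out, err := tailLogs(id)
	            if err != nil {
	                fmt.Println(id, "error:", err)
	                continue
	            }
	            fmt.Printf("==> %s (%d bytes)\n", id, len(out))
	        }
	    }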
	I0327 11:18:28.970129    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:18:33.972685    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:18:33.973124    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:18:34.013313    9387 logs.go:276] 1 containers: [e0d465757988]
	I0327 11:18:34.013443    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:18:34.032774    9387 logs.go:276] 1 containers: [e890ec827509]
	I0327 11:18:34.032861    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:18:34.048002    9387 logs.go:276] 4 containers: [05d2d6cf8ec3 a4cf93097427 8ec9b65e5b7b 9753130df36a]
	I0327 11:18:34.048067    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:18:34.060132    9387 logs.go:276] 1 containers: [3335bde7aa53]
	I0327 11:18:34.060200    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:18:34.070938    9387 logs.go:276] 1 containers: [4e89fcf32383]
	I0327 11:18:34.071014    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:18:34.081569    9387 logs.go:276] 1 containers: [567f2751a6e5]
	I0327 11:18:34.081629    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:18:34.095362    9387 logs.go:276] 0 containers: []
	W0327 11:18:34.095371    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:18:34.095416    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:18:34.105619    9387 logs.go:276] 1 containers: [9afd077140af]
	I0327 11:18:34.105640    9387 logs.go:123] Gathering logs for coredns [05d2d6cf8ec3] ...
	I0327 11:18:34.105645    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d2d6cf8ec3"
	I0327 11:18:34.117296    9387 logs.go:123] Gathering logs for coredns [9753130df36a] ...
	I0327 11:18:34.117306    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9753130df36a"
	I0327 11:18:34.129118    9387 logs.go:123] Gathering logs for storage-provisioner [9afd077140af] ...
	I0327 11:18:34.129133    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afd077140af"
	I0327 11:18:34.140499    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:18:34.140511    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:18:34.164153    9387 logs.go:123] Gathering logs for kube-apiserver [e0d465757988] ...
	I0327 11:18:34.164160    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d465757988"
	I0327 11:18:34.181410    9387 logs.go:123] Gathering logs for kube-scheduler [3335bde7aa53] ...
	I0327 11:18:34.181420    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3335bde7aa53"
	I0327 11:18:34.199091    9387 logs.go:123] Gathering logs for kube-proxy [4e89fcf32383] ...
	I0327 11:18:34.199103    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e89fcf32383"
	I0327 11:18:34.210712    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:18:34.210725    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:18:34.244739    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:18:34.244750    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:18:34.249334    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:18:34.249343    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:18:34.288803    9387 logs.go:123] Gathering logs for coredns [a4cf93097427] ...
	I0327 11:18:34.288812    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4cf93097427"
	I0327 11:18:34.300738    9387 logs.go:123] Gathering logs for kube-controller-manager [567f2751a6e5] ...
	I0327 11:18:34.300748    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 567f2751a6e5"
	I0327 11:18:34.327547    9387 logs.go:123] Gathering logs for etcd [e890ec827509] ...
	I0327 11:18:34.327557    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e890ec827509"
	I0327 11:18:34.359509    9387 logs.go:123] Gathering logs for coredns [8ec9b65e5b7b] ...
	I0327 11:18:34.359518    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ec9b65e5b7b"
	I0327 11:18:34.375142    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:18:34.375152    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
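	Every gather command runs through /bin/bash -c, and the "container status" line directly above shows why: it relies on backtick command substitution and || fallbacks (prefer crictl if installed, otherwise fall back to docker ps -a), which only a shell can interpret. A minimal sketch of that wrapper, not minikube's actual ssh_runner:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // runBash hands the whole command string to bash so shell syntax such as
	    // pipes, backticks, and || fallbacks is honored, matching the
	    // `/bin/bash -c "..."` lines in the log.
	    func runBash(cmd string) (string, error) {
	        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	        return string(out), err
	    }

	    func main() {
	        // The container-status command from the log, verbatim.
	        out, err := runBash("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	        if err != nil {
	            fmt.Println("error:", err)
	        }
	        fmt.Print(out)
	    }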
	I0327 11:18:36.888913    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:18:41.890148    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:18:41.890605    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:18:41.931872    9387 logs.go:276] 1 containers: [e0d465757988]
	I0327 11:18:41.932017    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:18:41.954087    9387 logs.go:276] 1 containers: [e890ec827509]
	I0327 11:18:41.954194    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:18:41.969864    9387 logs.go:276] 4 containers: [05d2d6cf8ec3 a4cf93097427 8ec9b65e5b7b 9753130df36a]
	I0327 11:18:41.969936    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:18:41.983696    9387 logs.go:276] 1 containers: [3335bde7aa53]
	I0327 11:18:41.983753    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:18:41.994264    9387 logs.go:276] 1 containers: [4e89fcf32383]
	I0327 11:18:41.994332    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:18:42.004732    9387 logs.go:276] 1 containers: [567f2751a6e5]
	I0327 11:18:42.004798    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:18:42.015038    9387 logs.go:276] 0 containers: []
	W0327 11:18:42.015049    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:18:42.015106    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:18:42.029569    9387 logs.go:276] 1 containers: [9afd077140af]
	I0327 11:18:42.029587    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:18:42.029592    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:18:42.033836    9387 logs.go:123] Gathering logs for kube-controller-manager [567f2751a6e5] ...
	I0327 11:18:42.033845    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 567f2751a6e5"
	I0327 11:18:42.051093    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:18:42.051105    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:18:42.076291    9387 logs.go:123] Gathering logs for coredns [05d2d6cf8ec3] ...
	I0327 11:18:42.076299    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d2d6cf8ec3"
	I0327 11:18:42.087594    9387 logs.go:123] Gathering logs for coredns [9753130df36a] ...
	I0327 11:18:42.087607    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9753130df36a"
	I0327 11:18:42.099202    9387 logs.go:123] Gathering logs for kube-scheduler [3335bde7aa53] ...
	I0327 11:18:42.099211    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3335bde7aa53"
	I0327 11:18:42.113426    9387 logs.go:123] Gathering logs for coredns [8ec9b65e5b7b] ...
	I0327 11:18:42.113438    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ec9b65e5b7b"
	I0327 11:18:42.125413    9387 logs.go:123] Gathering logs for storage-provisioner [9afd077140af] ...
	I0327 11:18:42.125426    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afd077140af"
	I0327 11:18:42.137046    9387 logs.go:123] Gathering logs for etcd [e890ec827509] ...
	I0327 11:18:42.137058    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e890ec827509"
	I0327 11:18:42.151236    9387 logs.go:123] Gathering logs for coredns [a4cf93097427] ...
	I0327 11:18:42.151246    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4cf93097427"
	I0327 11:18:42.162165    9387 logs.go:123] Gathering logs for kube-proxy [4e89fcf32383] ...
	I0327 11:18:42.162177    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e89fcf32383"
	I0327 11:18:42.174042    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:18:42.174053    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:18:42.185469    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:18:42.185480    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:18:42.219432    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:18:42.219440    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:18:42.255010    9387 logs.go:123] Gathering logs for kube-apiserver [e0d465757988] ...
	I0327 11:18:42.255019    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d465757988"
	I0327 11:18:44.771861    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:18:49.774571    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:18:49.775286    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:18:49.807896    9387 logs.go:276] 1 containers: [e0d465757988]
	I0327 11:18:49.808016    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:18:49.827545    9387 logs.go:276] 1 containers: [e890ec827509]
	I0327 11:18:49.827626    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:18:49.841551    9387 logs.go:276] 4 containers: [05d2d6cf8ec3 a4cf93097427 8ec9b65e5b7b 9753130df36a]
	I0327 11:18:49.841633    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:18:49.853226    9387 logs.go:276] 1 containers: [3335bde7aa53]
	I0327 11:18:49.853295    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:18:49.863620    9387 logs.go:276] 1 containers: [4e89fcf32383]
	I0327 11:18:49.863686    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:18:49.874758    9387 logs.go:276] 1 containers: [567f2751a6e5]
	I0327 11:18:49.874831    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:18:49.884932    9387 logs.go:276] 0 containers: []
	W0327 11:18:49.884942    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:18:49.884995    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:18:49.903360    9387 logs.go:276] 1 containers: [9afd077140af]
	I0327 11:18:49.903380    9387 logs.go:123] Gathering logs for coredns [05d2d6cf8ec3] ...
	I0327 11:18:49.903385    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d2d6cf8ec3"
	I0327 11:18:49.915159    9387 logs.go:123] Gathering logs for coredns [8ec9b65e5b7b] ...
	I0327 11:18:49.915172    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ec9b65e5b7b"
	I0327 11:18:49.926977    9387 logs.go:123] Gathering logs for storage-provisioner [9afd077140af] ...
	I0327 11:18:49.926989    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afd077140af"
	I0327 11:18:49.938192    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:18:49.938202    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:18:49.971431    9387 logs.go:123] Gathering logs for kube-apiserver [e0d465757988] ...
	I0327 11:18:49.971441    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d465757988"
	I0327 11:18:49.987602    9387 logs.go:123] Gathering logs for kube-scheduler [3335bde7aa53] ...
	I0327 11:18:49.987612    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3335bde7aa53"
	I0327 11:18:50.001998    9387 logs.go:123] Gathering logs for kube-proxy [4e89fcf32383] ...
	I0327 11:18:50.002007    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e89fcf32383"
	I0327 11:18:50.016185    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:18:50.016194    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:18:50.027947    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:18:50.027960    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:18:50.063507    9387 logs.go:123] Gathering logs for coredns [a4cf93097427] ...
	I0327 11:18:50.063518    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4cf93097427"
	I0327 11:18:50.078468    9387 logs.go:123] Gathering logs for coredns [9753130df36a] ...
	I0327 11:18:50.078476    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9753130df36a"
	I0327 11:18:50.090482    9387 logs.go:123] Gathering logs for kube-controller-manager [567f2751a6e5] ...
	I0327 11:18:50.090492    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 567f2751a6e5"
	I0327 11:18:50.108032    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:18:50.108040    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:18:50.131850    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:18:50.131858    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:18:50.135765    9387 logs.go:123] Gathering logs for etcd [e890ec827509] ...
	I0327 11:18:50.135774    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e890ec827509"
	I0327 11:18:52.651085    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:18:57.653667    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:18:57.653798    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:18:57.670475    9387 logs.go:276] 1 containers: [e0d465757988]
	I0327 11:18:57.670551    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:18:57.681766    9387 logs.go:276] 1 containers: [e890ec827509]
	I0327 11:18:57.681831    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:18:57.692163    9387 logs.go:276] 4 containers: [05d2d6cf8ec3 a4cf93097427 8ec9b65e5b7b 9753130df36a]
	I0327 11:18:57.692238    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:18:57.706987    9387 logs.go:276] 1 containers: [3335bde7aa53]
	I0327 11:18:57.707046    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:18:57.717044    9387 logs.go:276] 1 containers: [4e89fcf32383]
	I0327 11:18:57.717097    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:18:57.731416    9387 logs.go:276] 1 containers: [567f2751a6e5]
	I0327 11:18:57.731485    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:18:57.741414    9387 logs.go:276] 0 containers: []
	W0327 11:18:57.741426    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:18:57.741482    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:18:57.751902    9387 logs.go:276] 1 containers: [9afd077140af]
	I0327 11:18:57.751921    9387 logs.go:123] Gathering logs for kube-scheduler [3335bde7aa53] ...
	I0327 11:18:57.751926    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3335bde7aa53"
	I0327 11:18:57.772719    9387 logs.go:123] Gathering logs for kube-controller-manager [567f2751a6e5] ...
	I0327 11:18:57.772730    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 567f2751a6e5"
	I0327 11:18:57.790782    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:18:57.790791    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:18:57.815240    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:18:57.815251    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:18:57.849134    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:18:57.849141    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:18:57.884391    9387 logs.go:123] Gathering logs for kube-apiserver [e0d465757988] ...
	I0327 11:18:57.884405    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d465757988"
	I0327 11:18:57.898758    9387 logs.go:123] Gathering logs for coredns [a4cf93097427] ...
	I0327 11:18:57.898768    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4cf93097427"
	I0327 11:18:57.910317    9387 logs.go:123] Gathering logs for etcd [e890ec827509] ...
	I0327 11:18:57.910330    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e890ec827509"
	I0327 11:18:57.924023    9387 logs.go:123] Gathering logs for coredns [9753130df36a] ...
	I0327 11:18:57.924032    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9753130df36a"
	I0327 11:18:57.935451    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:18:57.935463    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:18:57.947430    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:18:57.947443    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:18:57.951712    9387 logs.go:123] Gathering logs for coredns [05d2d6cf8ec3] ...
	I0327 11:18:57.951721    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d2d6cf8ec3"
	I0327 11:18:57.963363    9387 logs.go:123] Gathering logs for coredns [8ec9b65e5b7b] ...
	I0327 11:18:57.963376    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ec9b65e5b7b"
	I0327 11:18:57.975190    9387 logs.go:123] Gathering logs for kube-proxy [4e89fcf32383] ...
	I0327 11:18:57.975204    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e89fcf32383"
	I0327 11:18:57.986784    9387 logs.go:123] Gathering logs for storage-provisioner [9afd077140af] ...
	I0327 11:18:57.986795    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afd077140af"
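	Stepping back, the whole "Gathering logs for X ..." phase is one table-driven loop: a named source, a shell command, output collected. A sketch with the command strings copied verbatim from this section (the struct and ordering here are illustrative, not minikube's internal layout); note the dmesg flags: -P disables the pager, -H enables human-readable output, -L=never disables color, and --level restricts output to warnings and worse:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // One shell command per log source, as seen in the gathering lines above.
	        sources := []struct{ name, cmd string }{
	            {"kubelet", "sudo journalctl -u kubelet -n 400"},
	            {"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
	            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	            {"describe nodes", "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
	        }
	        for _, s := range sources {
	            fmt.Printf("Gathering logs for %s ...\n", s.name)
	            out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
	            if err != nil {
	                fmt.Printf("  %s failed: %v\n", s.name, err)
	            }
	            fmt.Printf("  %d bytes collected\n", len(out))
	        }
	    }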
	I0327 11:19:00.499987    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:19:05.502175    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:19:05.502363    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:19:05.515578    9387 logs.go:276] 1 containers: [e0d465757988]
	I0327 11:19:05.515646    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:19:05.526664    9387 logs.go:276] 1 containers: [e890ec827509]
	I0327 11:19:05.526732    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:19:05.537129    9387 logs.go:276] 4 containers: [05d2d6cf8ec3 a4cf93097427 8ec9b65e5b7b 9753130df36a]
	I0327 11:19:05.537193    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:19:05.547634    9387 logs.go:276] 1 containers: [3335bde7aa53]
	I0327 11:19:05.547704    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:19:05.558146    9387 logs.go:276] 1 containers: [4e89fcf32383]
	I0327 11:19:05.558205    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:19:05.569577    9387 logs.go:276] 1 containers: [567f2751a6e5]
	I0327 11:19:05.569645    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:19:05.585991    9387 logs.go:276] 0 containers: []
	W0327 11:19:05.586002    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:19:05.586054    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:19:05.596831    9387 logs.go:276] 1 containers: [9afd077140af]
	I0327 11:19:05.596847    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:19:05.596853    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:19:05.632211    9387 logs.go:123] Gathering logs for coredns [9753130df36a] ...
	I0327 11:19:05.632222    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9753130df36a"
	I0327 11:19:05.646990    9387 logs.go:123] Gathering logs for kube-scheduler [3335bde7aa53] ...
	I0327 11:19:05.647001    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3335bde7aa53"
	I0327 11:19:05.661550    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:19:05.661560    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:19:05.685165    9387 logs.go:123] Gathering logs for kube-apiserver [e0d465757988] ...
	I0327 11:19:05.685176    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d465757988"
	I0327 11:19:05.699560    9387 logs.go:123] Gathering logs for etcd [e890ec827509] ...
	I0327 11:19:05.699571    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e890ec827509"
	I0327 11:19:05.713268    9387 logs.go:123] Gathering logs for kube-proxy [4e89fcf32383] ...
	I0327 11:19:05.713278    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e89fcf32383"
	I0327 11:19:05.725226    9387 logs.go:123] Gathering logs for storage-provisioner [9afd077140af] ...
	I0327 11:19:05.725242    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afd077140af"
	I0327 11:19:05.736647    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:19:05.736660    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:19:05.747923    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:19:05.747935    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:19:05.752442    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:19:05.752451    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:19:05.786133    9387 logs.go:123] Gathering logs for coredns [05d2d6cf8ec3] ...
	I0327 11:19:05.786146    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d2d6cf8ec3"
	I0327 11:19:05.797389    9387 logs.go:123] Gathering logs for coredns [a4cf93097427] ...
	I0327 11:19:05.797401    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4cf93097427"
	I0327 11:19:05.809181    9387 logs.go:123] Gathering logs for coredns [8ec9b65e5b7b] ...
	I0327 11:19:05.809189    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ec9b65e5b7b"
	I0327 11:19:05.820801    9387 logs.go:123] Gathering logs for kube-controller-manager [567f2751a6e5] ...
	I0327 11:19:05.820814    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 567f2751a6e5"
	I0327 11:19:08.341930    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:19:13.343538    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:19:13.343617    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:19:13.355403    9387 logs.go:276] 1 containers: [e0d465757988]
	I0327 11:19:13.355470    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:19:13.368246    9387 logs.go:276] 1 containers: [e890ec827509]
	I0327 11:19:13.368297    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:19:13.380019    9387 logs.go:276] 4 containers: [05d2d6cf8ec3 a4cf93097427 8ec9b65e5b7b 9753130df36a]
	I0327 11:19:13.380068    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:19:13.395862    9387 logs.go:276] 1 containers: [3335bde7aa53]
	I0327 11:19:13.395947    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:19:13.409414    9387 logs.go:276] 1 containers: [4e89fcf32383]
	I0327 11:19:13.409473    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:19:13.423017    9387 logs.go:276] 1 containers: [567f2751a6e5]
	I0327 11:19:13.423059    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:19:13.433378    9387 logs.go:276] 0 containers: []
	W0327 11:19:13.433392    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:19:13.433433    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:19:13.444227    9387 logs.go:276] 1 containers: [9afd077140af]
	I0327 11:19:13.444243    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:19:13.444249    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:19:13.448861    9387 logs.go:123] Gathering logs for coredns [8ec9b65e5b7b] ...
	I0327 11:19:13.448869    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ec9b65e5b7b"
	I0327 11:19:13.460250    9387 logs.go:123] Gathering logs for kube-scheduler [3335bde7aa53] ...
	I0327 11:19:13.460261    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3335bde7aa53"
	I0327 11:19:13.475397    9387 logs.go:123] Gathering logs for storage-provisioner [9afd077140af] ...
	I0327 11:19:13.475408    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afd077140af"
	I0327 11:19:13.488147    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:19:13.488161    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:19:13.501184    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:19:13.501197    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:19:13.536372    9387 logs.go:123] Gathering logs for etcd [e890ec827509] ...
	I0327 11:19:13.536392    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e890ec827509"
	I0327 11:19:13.552534    9387 logs.go:123] Gathering logs for coredns [a4cf93097427] ...
	I0327 11:19:13.552565    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4cf93097427"
	I0327 11:19:13.565978    9387 logs.go:123] Gathering logs for coredns [9753130df36a] ...
	I0327 11:19:13.565994    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9753130df36a"
	I0327 11:19:13.579534    9387 logs.go:123] Gathering logs for coredns [05d2d6cf8ec3] ...
	I0327 11:19:13.579548    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d2d6cf8ec3"
	I0327 11:19:13.593154    9387 logs.go:123] Gathering logs for kube-proxy [4e89fcf32383] ...
	I0327 11:19:13.593166    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e89fcf32383"
	I0327 11:19:13.607593    9387 logs.go:123] Gathering logs for kube-controller-manager [567f2751a6e5] ...
	I0327 11:19:13.607609    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 567f2751a6e5"
	I0327 11:19:13.632597    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:19:13.632615    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:19:13.657580    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:19:13.657589    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:19:13.693155    9387 logs.go:123] Gathering logs for kube-apiserver [e0d465757988] ...
	I0327 11:19:13.693166    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d465757988"
	I0327 11:19:16.210060    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:19:21.212712    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:19:21.212841    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:19:21.231097    9387 logs.go:276] 1 containers: [e0d465757988]
	I0327 11:19:21.231175    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:19:21.243460    9387 logs.go:276] 1 containers: [e890ec827509]
	I0327 11:19:21.243521    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:19:21.254779    9387 logs.go:276] 4 containers: [05d2d6cf8ec3 a4cf93097427 8ec9b65e5b7b 9753130df36a]
	I0327 11:19:21.254847    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:19:21.264984    9387 logs.go:276] 1 containers: [3335bde7aa53]
	I0327 11:19:21.265044    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:19:21.274851    9387 logs.go:276] 1 containers: [4e89fcf32383]
	I0327 11:19:21.274907    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:19:21.285053    9387 logs.go:276] 1 containers: [567f2751a6e5]
	I0327 11:19:21.285122    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:19:21.294801    9387 logs.go:276] 0 containers: []
	W0327 11:19:21.294811    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:19:21.294866    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:19:21.307589    9387 logs.go:276] 1 containers: [9afd077140af]
	I0327 11:19:21.307609    9387 logs.go:123] Gathering logs for etcd [e890ec827509] ...
	I0327 11:19:21.307614    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e890ec827509"
	I0327 11:19:21.322253    9387 logs.go:123] Gathering logs for kube-proxy [4e89fcf32383] ...
	I0327 11:19:21.322265    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e89fcf32383"
	I0327 11:19:21.333726    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:19:21.333739    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:19:21.357766    9387 logs.go:123] Gathering logs for coredns [05d2d6cf8ec3] ...
	I0327 11:19:21.357776    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d2d6cf8ec3"
	I0327 11:19:21.369125    9387 logs.go:123] Gathering logs for kube-controller-manager [567f2751a6e5] ...
	I0327 11:19:21.369134    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 567f2751a6e5"
	I0327 11:19:21.391214    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:19:21.391224    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:19:21.402810    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:19:21.402821    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:19:21.437131    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:19:21.437139    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:19:21.441144    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:19:21.441150    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:19:21.477016    9387 logs.go:123] Gathering logs for coredns [8ec9b65e5b7b] ...
	I0327 11:19:21.477031    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ec9b65e5b7b"
	I0327 11:19:21.489195    9387 logs.go:123] Gathering logs for kube-apiserver [e0d465757988] ...
	I0327 11:19:21.489206    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d465757988"
	I0327 11:19:21.503279    9387 logs.go:123] Gathering logs for coredns [a4cf93097427] ...
	I0327 11:19:21.503289    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4cf93097427"
	I0327 11:19:21.514885    9387 logs.go:123] Gathering logs for coredns [9753130df36a] ...
	I0327 11:19:21.514895    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9753130df36a"
	I0327 11:19:21.526395    9387 logs.go:123] Gathering logs for kube-scheduler [3335bde7aa53] ...
	I0327 11:19:21.526404    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3335bde7aa53"
	I0327 11:19:21.539996    9387 logs.go:123] Gathering logs for storage-provisioner [9afd077140af] ...
	I0327 11:19:21.540004    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afd077140af"
	I0327 11:19:24.054151    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:19:29.056200    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:19:29.056669    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:19:29.095979    9387 logs.go:276] 1 containers: [e0d465757988]
	I0327 11:19:29.096115    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:19:29.118000    9387 logs.go:276] 1 containers: [e890ec827509]
	I0327 11:19:29.118109    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:19:29.138799    9387 logs.go:276] 4 containers: [05d2d6cf8ec3 a4cf93097427 8ec9b65e5b7b 9753130df36a]
	I0327 11:19:29.138873    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:19:29.150632    9387 logs.go:276] 1 containers: [3335bde7aa53]
	I0327 11:19:29.150702    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:19:29.161927    9387 logs.go:276] 1 containers: [4e89fcf32383]
	I0327 11:19:29.161985    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:19:29.172530    9387 logs.go:276] 1 containers: [567f2751a6e5]
	I0327 11:19:29.172596    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:19:29.183421    9387 logs.go:276] 0 containers: []
	W0327 11:19:29.183432    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:19:29.183490    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:19:29.194006    9387 logs.go:276] 1 containers: [9afd077140af]
	I0327 11:19:29.194023    9387 logs.go:123] Gathering logs for coredns [8ec9b65e5b7b] ...
	I0327 11:19:29.194028    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ec9b65e5b7b"
	I0327 11:19:29.205779    9387 logs.go:123] Gathering logs for kube-controller-manager [567f2751a6e5] ...
	I0327 11:19:29.205790    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 567f2751a6e5"
	I0327 11:19:29.223370    9387 logs.go:123] Gathering logs for kube-apiserver [e0d465757988] ...
	I0327 11:19:29.223381    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d465757988"
	I0327 11:19:29.238489    9387 logs.go:123] Gathering logs for etcd [e890ec827509] ...
	I0327 11:19:29.238501    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e890ec827509"
	I0327 11:19:29.253138    9387 logs.go:123] Gathering logs for storage-provisioner [9afd077140af] ...
	I0327 11:19:29.253147    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afd077140af"
	I0327 11:19:29.264586    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:19:29.264595    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:19:29.275945    9387 logs.go:123] Gathering logs for coredns [a4cf93097427] ...
	I0327 11:19:29.275954    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4cf93097427"
	I0327 11:19:29.287642    9387 logs.go:123] Gathering logs for coredns [9753130df36a] ...
	I0327 11:19:29.287656    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9753130df36a"
	I0327 11:19:29.300013    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:19:29.300025    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:19:29.304708    9387 logs.go:123] Gathering logs for coredns [05d2d6cf8ec3] ...
	I0327 11:19:29.304717    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d2d6cf8ec3"
	I0327 11:19:29.318763    9387 logs.go:123] Gathering logs for kube-scheduler [3335bde7aa53] ...
	I0327 11:19:29.318775    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3335bde7aa53"
	I0327 11:19:29.335074    9387 logs.go:123] Gathering logs for kube-proxy [4e89fcf32383] ...
	I0327 11:19:29.335087    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e89fcf32383"
	I0327 11:19:29.354650    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:19:29.354662    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:19:29.379479    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:19:29.379487    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:19:29.414340    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:19:29.414346    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:19:31.951379    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:19:36.953728    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:19:36.953834    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:19:36.965475    9387 logs.go:276] 1 containers: [e0d465757988]
	I0327 11:19:36.965545    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:19:36.976340    9387 logs.go:276] 1 containers: [e890ec827509]
	I0327 11:19:36.976407    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:19:36.988585    9387 logs.go:276] 4 containers: [05d2d6cf8ec3 a4cf93097427 8ec9b65e5b7b 9753130df36a]
	I0327 11:19:36.988642    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:19:36.999527    9387 logs.go:276] 1 containers: [3335bde7aa53]
	I0327 11:19:36.999581    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:19:37.010772    9387 logs.go:276] 1 containers: [4e89fcf32383]
	I0327 11:19:37.010844    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:19:37.022229    9387 logs.go:276] 1 containers: [567f2751a6e5]
	I0327 11:19:37.022284    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:19:37.035004    9387 logs.go:276] 0 containers: []
	W0327 11:19:37.035017    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:19:37.035058    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:19:37.046160    9387 logs.go:276] 1 containers: [9afd077140af]
	I0327 11:19:37.046177    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:19:37.046182    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:19:37.083756    9387 logs.go:123] Gathering logs for kube-apiserver [e0d465757988] ...
	I0327 11:19:37.083768    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d465757988"
	I0327 11:19:37.109598    9387 logs.go:123] Gathering logs for coredns [05d2d6cf8ec3] ...
	I0327 11:19:37.109614    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d2d6cf8ec3"
	I0327 11:19:37.122807    9387 logs.go:123] Gathering logs for kube-controller-manager [567f2751a6e5] ...
	I0327 11:19:37.122819    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 567f2751a6e5"
	I0327 11:19:37.142500    9387 logs.go:123] Gathering logs for storage-provisioner [9afd077140af] ...
	I0327 11:19:37.142508    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afd077140af"
	I0327 11:19:37.154925    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:19:37.154938    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:19:37.190132    9387 logs.go:123] Gathering logs for coredns [a4cf93097427] ...
	I0327 11:19:37.190142    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4cf93097427"
	I0327 11:19:37.202643    9387 logs.go:123] Gathering logs for coredns [9753130df36a] ...
	I0327 11:19:37.202653    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9753130df36a"
	I0327 11:19:37.222461    9387 logs.go:123] Gathering logs for kube-scheduler [3335bde7aa53] ...
	I0327 11:19:37.222473    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3335bde7aa53"
	I0327 11:19:37.237447    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:19:37.237457    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:19:37.241901    9387 logs.go:123] Gathering logs for coredns [8ec9b65e5b7b] ...
	I0327 11:19:37.241912    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ec9b65e5b7b"
	I0327 11:19:37.254517    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:19:37.254531    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:19:37.270301    9387 logs.go:123] Gathering logs for etcd [e890ec827509] ...
	I0327 11:19:37.270313    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e890ec827509"
	I0327 11:19:37.284699    9387 logs.go:123] Gathering logs for kube-proxy [4e89fcf32383] ...
	I0327 11:19:37.284708    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e89fcf32383"
	I0327 11:19:37.297286    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:19:37.297297    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:19:39.823936    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:19:44.824820    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:19:44.825285    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:19:44.865049    9387 logs.go:276] 1 containers: [e0d465757988]
	I0327 11:19:44.865204    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:19:44.890218    9387 logs.go:276] 1 containers: [e890ec827509]
	I0327 11:19:44.890315    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:19:44.904784    9387 logs.go:276] 4 containers: [05d2d6cf8ec3 a4cf93097427 8ec9b65e5b7b 9753130df36a]
	I0327 11:19:44.904860    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:19:44.950466    9387 logs.go:276] 1 containers: [3335bde7aa53]
	I0327 11:19:44.950540    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:19:44.972543    9387 logs.go:276] 1 containers: [4e89fcf32383]
	I0327 11:19:44.972616    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:19:44.982838    9387 logs.go:276] 1 containers: [567f2751a6e5]
	I0327 11:19:44.982911    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:19:44.993037    9387 logs.go:276] 0 containers: []
	W0327 11:19:44.993047    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:19:44.993103    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:19:45.002919    9387 logs.go:276] 1 containers: [9afd077140af]
	I0327 11:19:45.002937    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:19:45.002942    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:19:45.037881    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:19:45.037888    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:19:45.041708    9387 logs.go:123] Gathering logs for coredns [05d2d6cf8ec3] ...
	I0327 11:19:45.041713    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d2d6cf8ec3"
	I0327 11:19:45.053567    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:19:45.053580    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:19:45.137222    9387 logs.go:123] Gathering logs for coredns [a4cf93097427] ...
	I0327 11:19:45.137236    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4cf93097427"
	I0327 11:19:45.150325    9387 logs.go:123] Gathering logs for coredns [8ec9b65e5b7b] ...
	I0327 11:19:45.150339    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ec9b65e5b7b"
	I0327 11:19:45.167908    9387 logs.go:123] Gathering logs for coredns [9753130df36a] ...
	I0327 11:19:45.167922    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9753130df36a"
	I0327 11:19:45.181125    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:19:45.181143    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:19:45.207689    9387 logs.go:123] Gathering logs for etcd [e890ec827509] ...
	I0327 11:19:45.207707    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e890ec827509"
	I0327 11:19:45.230305    9387 logs.go:123] Gathering logs for kube-scheduler [3335bde7aa53] ...
	I0327 11:19:45.230319    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3335bde7aa53"
	I0327 11:19:45.245590    9387 logs.go:123] Gathering logs for kube-proxy [4e89fcf32383] ...
	I0327 11:19:45.245602    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e89fcf32383"
	I0327 11:19:45.257319    9387 logs.go:123] Gathering logs for kube-controller-manager [567f2751a6e5] ...
	I0327 11:19:45.257330    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 567f2751a6e5"
	I0327 11:19:45.277019    9387 logs.go:123] Gathering logs for kube-apiserver [e0d465757988] ...
	I0327 11:19:45.277032    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d465757988"
	I0327 11:19:45.292885    9387 logs.go:123] Gathering logs for storage-provisioner [9afd077140af] ...
	I0327 11:19:45.292895    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afd077140af"
	I0327 11:19:45.304454    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:19:45.304464    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:19:47.820350    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:19:52.822471    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:19:52.822889    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:19:52.862152    9387 logs.go:276] 1 containers: [e0d465757988]
	I0327 11:19:52.862284    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:19:52.883803    9387 logs.go:276] 1 containers: [e890ec827509]
	I0327 11:19:52.883910    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:19:52.899811    9387 logs.go:276] 4 containers: [05d2d6cf8ec3 a4cf93097427 8ec9b65e5b7b 9753130df36a]
	I0327 11:19:52.899876    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:19:52.912023    9387 logs.go:276] 1 containers: [3335bde7aa53]
	I0327 11:19:52.912077    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:19:52.923296    9387 logs.go:276] 1 containers: [4e89fcf32383]
	I0327 11:19:52.923362    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:19:52.938338    9387 logs.go:276] 1 containers: [567f2751a6e5]
	I0327 11:19:52.938400    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:19:52.948649    9387 logs.go:276] 0 containers: []
	W0327 11:19:52.948660    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:19:52.948719    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:19:52.959457    9387 logs.go:276] 1 containers: [9afd077140af]
	I0327 11:19:52.959478    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:19:52.959483    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:19:52.994116    9387 logs.go:123] Gathering logs for coredns [05d2d6cf8ec3] ...
	I0327 11:19:52.994128    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d2d6cf8ec3"
	I0327 11:19:53.005551    9387 logs.go:123] Gathering logs for coredns [a4cf93097427] ...
	I0327 11:19:53.005562    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4cf93097427"
	I0327 11:19:53.017083    9387 logs.go:123] Gathering logs for kube-proxy [4e89fcf32383] ...
	I0327 11:19:53.017096    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e89fcf32383"
	I0327 11:19:53.028654    9387 logs.go:123] Gathering logs for coredns [8ec9b65e5b7b] ...
	I0327 11:19:53.028667    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ec9b65e5b7b"
	I0327 11:19:53.040102    9387 logs.go:123] Gathering logs for coredns [9753130df36a] ...
	I0327 11:19:53.040115    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9753130df36a"
	I0327 11:19:53.051290    9387 logs.go:123] Gathering logs for kube-controller-manager [567f2751a6e5] ...
	I0327 11:19:53.051302    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 567f2751a6e5"
	I0327 11:19:53.068881    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:19:53.068896    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:19:53.092688    9387 logs.go:123] Gathering logs for storage-provisioner [9afd077140af] ...
	I0327 11:19:53.092695    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afd077140af"
	I0327 11:19:53.104393    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:19:53.104404    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:19:53.108630    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:19:53.108636    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:19:53.145159    9387 logs.go:123] Gathering logs for kube-apiserver [e0d465757988] ...
	I0327 11:19:53.145172    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d465757988"
	I0327 11:19:53.159274    9387 logs.go:123] Gathering logs for etcd [e890ec827509] ...
	I0327 11:19:53.159286    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e890ec827509"
	I0327 11:19:53.173200    9387 logs.go:123] Gathering logs for kube-scheduler [3335bde7aa53] ...
	I0327 11:19:53.173212    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3335bde7aa53"
	I0327 11:19:53.187604    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:19:53.187616    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:19:55.701214    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:20:00.703465    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:20:00.703550    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 11:20:00.714798    9387 logs.go:276] 1 containers: [e0d465757988]
	I0327 11:20:00.714858    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 11:20:00.726427    9387 logs.go:276] 1 containers: [e890ec827509]
	I0327 11:20:00.726502    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 11:20:00.737597    9387 logs.go:276] 4 containers: [05d2d6cf8ec3 a4cf93097427 8ec9b65e5b7b 9753130df36a]
	I0327 11:20:00.737676    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 11:20:00.749101    9387 logs.go:276] 1 containers: [3335bde7aa53]
	I0327 11:20:00.749158    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 11:20:00.760238    9387 logs.go:276] 1 containers: [4e89fcf32383]
	I0327 11:20:00.760315    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 11:20:00.771753    9387 logs.go:276] 1 containers: [567f2751a6e5]
	I0327 11:20:00.771822    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 11:20:00.789869    9387 logs.go:276] 0 containers: []
	W0327 11:20:00.789882    9387 logs.go:278] No container was found matching "kindnet"
	I0327 11:20:00.789945    9387 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 11:20:00.801046    9387 logs.go:276] 1 containers: [9afd077140af]
	I0327 11:20:00.801065    9387 logs.go:123] Gathering logs for etcd [e890ec827509] ...
	I0327 11:20:00.801071    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e890ec827509"
	I0327 11:20:00.815718    9387 logs.go:123] Gathering logs for coredns [9753130df36a] ...
	I0327 11:20:00.815731    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9753130df36a"
	I0327 11:20:00.828487    9387 logs.go:123] Gathering logs for kubelet ...
	I0327 11:20:00.828499    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 11:20:00.864694    9387 logs.go:123] Gathering logs for dmesg ...
	I0327 11:20:00.864717    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 11:20:00.869407    9387 logs.go:123] Gathering logs for coredns [8ec9b65e5b7b] ...
	I0327 11:20:00.869417    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ec9b65e5b7b"
	I0327 11:20:00.882726    9387 logs.go:123] Gathering logs for kube-scheduler [3335bde7aa53] ...
	I0327 11:20:00.882741    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3335bde7aa53"
	I0327 11:20:00.901335    9387 logs.go:123] Gathering logs for kube-proxy [4e89fcf32383] ...
	I0327 11:20:00.901351    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e89fcf32383"
	I0327 11:20:00.914376    9387 logs.go:123] Gathering logs for container status ...
	I0327 11:20:00.914386    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 11:20:00.927955    9387 logs.go:123] Gathering logs for coredns [a4cf93097427] ...
	I0327 11:20:00.927970    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4cf93097427"
	I0327 11:20:00.940345    9387 logs.go:123] Gathering logs for kube-controller-manager [567f2751a6e5] ...
	I0327 11:20:00.940354    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 567f2751a6e5"
	I0327 11:20:00.961738    9387 logs.go:123] Gathering logs for describe nodes ...
	I0327 11:20:00.961753    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 11:20:01.003783    9387 logs.go:123] Gathering logs for kube-apiserver [e0d465757988] ...
	I0327 11:20:01.003795    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d465757988"
	I0327 11:20:01.020125    9387 logs.go:123] Gathering logs for coredns [05d2d6cf8ec3] ...
	I0327 11:20:01.020136    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d2d6cf8ec3"
	I0327 11:20:01.032403    9387 logs.go:123] Gathering logs for storage-provisioner [9afd077140af] ...
	I0327 11:20:01.032413    9387 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afd077140af"
	I0327 11:20:01.047279    9387 logs.go:123] Gathering logs for Docker ...
	I0327 11:20:01.047291    9387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 11:20:03.573679    9387 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 11:20:08.575239    9387 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 11:20:08.580653    9387 out.go:177] 
	W0327 11:20:08.584590    9387 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0327 11:20:08.584596    9387 out.go:239] * 
	W0327 11:20:08.585110    9387 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:20:08.592567    9387 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-042000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (580.28s)
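
The repeated "Gathering logs" cycles above are minikube re-collecting diagnostics each time its probe of https://10.0.2.15:8443/healthz times out. Below is a minimal Go sketch of that probe loop, illustrative only: the URL, the ~5s per-request timeout, and the 6m0s budget come from the log above, while everything else is an assumption rather than minikube's actual api_server.go code.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // each probe in the log gives up after ~5s
			// The guest apiserver's certificate is not trusted by the host,
			// so this diagnostic probe skips verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(6 * time.Minute) // the "wait 6m0s for node" budget
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("apiserver healthz never reported healthy: deadline exceeded")
	}

Running the same probe by hand from the host is a quick way to tell whether the apiserver inside the upgraded VM ever came up at all.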

TestPause/serial/Start (10.1s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-466000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-466000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.041586291s)

-- stdout --
	* [pause-466000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-466000" primary control-plane node in "pause-466000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-466000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-466000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-466000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-466000 -n pause-466000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-466000 -n pause-466000: exit status 7 (58.06975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-466000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.10s)
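
This failure, and every TestNoKubernetes and TestNetworkPlugins failure that follows, has the same host-side root cause: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and every attempt dies with Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. the socket_vmnet daemon is not listening on the agent. A minimal pre-flight check is sketched below; it is illustrative and not part of minikube, with only the socket path taken from the errors above.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Path reported in the "Connection refused" errors above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Printf("socket_vmnet unreachable (%v); start the daemon before using --driver=qemu2\n", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Because the daemon is shared host state, one fix on the Jenkins agent would clear this whole family of failures rather than any single test.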

TestNoKubernetes/serial/StartWithK8s (9.87s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-902000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-902000 --driver=qemu2 : exit status 80 (9.810336833s)

-- stdout --
	* [NoKubernetes-902000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-902000" primary control-plane node in "NoKubernetes-902000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-902000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-902000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-902000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-902000 -n NoKubernetes-902000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-902000 -n NoKubernetes-902000: exit status 7 (54.917125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-902000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.87s)

TestNoKubernetes/serial/StartWithStopK8s (5.88s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-902000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-902000 --no-kubernetes --driver=qemu2 : exit status 80 (5.822194709s)

-- stdout --
	* [NoKubernetes-902000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-902000
	* Restarting existing qemu2 VM for "NoKubernetes-902000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-902000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-902000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-902000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-902000 -n NoKubernetes-902000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-902000 -n NoKubernetes-902000: exit status 7 (57.772917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-902000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.88s)

TestNoKubernetes/serial/Start (5.9s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-902000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-902000 --no-kubernetes --driver=qemu2 : exit status 80 (5.827621958s)

-- stdout --
	* [NoKubernetes-902000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-902000
	* Restarting existing qemu2 VM for "NoKubernetes-902000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-902000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-902000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-902000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-902000 -n NoKubernetes-902000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-902000 -n NoKubernetes-902000: exit status 7 (68.365292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-902000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.90s)

TestNoKubernetes/serial/StartNoArgs (5.88s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-902000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-902000 --driver=qemu2 : exit status 80 (5.844897083s)

-- stdout --
	* [NoKubernetes-902000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-902000
	* Restarting existing qemu2 VM for "NoKubernetes-902000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-902000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-902000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-902000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-902000 -n NoKubernetes-902000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-902000 -n NoKubernetes-902000: exit status 7 (38.285625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-902000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.88s)

TestNetworkPlugins/group/auto/Start (9.84s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-314000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-314000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.834756958s)

-- stdout --
	* [auto-314000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-314000" primary control-plane node in "auto-314000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-314000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:18:36.587384    9788 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:18:36.587518    9788 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:18:36.587521    9788 out.go:304] Setting ErrFile to fd 2...
	I0327 11:18:36.587523    9788 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:18:36.587639    9788 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:18:36.588698    9788 out.go:298] Setting JSON to false
	I0327 11:18:36.604776    9788 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6489,"bootTime":1711557027,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:18:36.604847    9788 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:18:36.611008    9788 out.go:177] * [auto-314000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:18:36.619011    9788 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:18:36.619063    9788 notify.go:220] Checking for updates...
	I0327 11:18:36.625977    9788 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:18:36.627342    9788 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:18:36.630965    9788 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:18:36.634015    9788 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:18:36.635357    9788 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:18:36.639191    9788 config.go:182] Loaded profile config "multinode-906000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:18:36.639254    9788 config.go:182] Loaded profile config "stopped-upgrade-042000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 11:18:36.639301    9788 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:18:36.643992    9788 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 11:18:36.648920    9788 start.go:297] selected driver: qemu2
	I0327 11:18:36.648926    9788 start.go:901] validating driver "qemu2" against <nil>
	I0327 11:18:36.648941    9788 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:18:36.651141    9788 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 11:18:36.653930    9788 out.go:177] * Automatically selected the socket_vmnet network
	I0327 11:18:36.657102    9788 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:18:36.657157    9788 cni.go:84] Creating CNI manager for ""
	I0327 11:18:36.657165    9788 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:18:36.657174    9788 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 11:18:36.657209    9788 start.go:340] cluster config:
	{Name:auto-314000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:18:36.661334    9788 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:18:36.668932    9788 out.go:177] * Starting "auto-314000" primary control-plane node in "auto-314000" cluster
	I0327 11:18:36.672836    9788 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 11:18:36.672849    9788 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 11:18:36.672858    9788 cache.go:56] Caching tarball of preloaded images
	I0327 11:18:36.672904    9788 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:18:36.672913    9788 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 11:18:36.672975    9788 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/auto-314000/config.json ...
	I0327 11:18:36.672987    9788 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/auto-314000/config.json: {Name:mk1c6a127b1153900589b593b164e02b25d486b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:18:36.673284    9788 start.go:360] acquireMachinesLock for auto-314000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:18:36.673311    9788 start.go:364] duration metric: took 21.666µs to acquireMachinesLock for "auto-314000"
	I0327 11:18:36.673322    9788 start.go:93] Provisioning new machine with config: &{Name:auto-314000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:18:36.673349    9788 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:18:36.677005    9788 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 11:18:36.691382    9788 start.go:159] libmachine.API.Create for "auto-314000" (driver="qemu2")
	I0327 11:18:36.691411    9788 client.go:168] LocalClient.Create starting
	I0327 11:18:36.691473    9788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:18:36.691500    9788 main.go:141] libmachine: Decoding PEM data...
	I0327 11:18:36.691510    9788 main.go:141] libmachine: Parsing certificate...
	I0327 11:18:36.691554    9788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:18:36.691576    9788 main.go:141] libmachine: Decoding PEM data...
	I0327 11:18:36.691584    9788 main.go:141] libmachine: Parsing certificate...
	I0327 11:18:36.691892    9788 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:18:36.845583    9788 main.go:141] libmachine: Creating SSH key...
	I0327 11:18:36.940060    9788 main.go:141] libmachine: Creating Disk image...
	I0327 11:18:36.940067    9788 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:18:36.940233    9788 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/auto-314000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/auto-314000/disk.qcow2
	I0327 11:18:36.952964    9788 main.go:141] libmachine: STDOUT: 
	I0327 11:18:36.952984    9788 main.go:141] libmachine: STDERR: 
	I0327 11:18:36.953042    9788 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/auto-314000/disk.qcow2 +20000M
	I0327 11:18:36.964115    9788 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:18:36.964134    9788 main.go:141] libmachine: STDERR: 
	I0327 11:18:36.964148    9788 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/auto-314000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/auto-314000/disk.qcow2
	I0327 11:18:36.964154    9788 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:18:36.964184    9788 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/auto-314000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/auto-314000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/auto-314000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:89:77:45:fd:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/auto-314000/disk.qcow2
	I0327 11:18:36.966124    9788 main.go:141] libmachine: STDOUT: 
	I0327 11:18:36.966140    9788 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:18:36.966158    9788 client.go:171] duration metric: took 274.743292ms to LocalClient.Create
	I0327 11:18:38.968425    9788 start.go:128] duration metric: took 2.295058667s to createHost
	I0327 11:18:38.968517    9788 start.go:83] releasing machines lock for "auto-314000", held for 2.295212125s
	W0327 11:18:38.968586    9788 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:18:38.978810    9788 out.go:177] * Deleting "auto-314000" in qemu2 ...
	W0327 11:18:39.006098    9788 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:18:39.006129    9788 start.go:728] Will try again in 5 seconds ...
	I0327 11:18:44.008405    9788 start.go:360] acquireMachinesLock for auto-314000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:18:44.008930    9788 start.go:364] duration metric: took 394.375µs to acquireMachinesLock for "auto-314000"
	I0327 11:18:44.008997    9788 start.go:93] Provisioning new machine with config: &{Name:auto-314000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:18:44.009308    9788 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:18:44.018896    9788 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 11:18:44.070013    9788 start.go:159] libmachine.API.Create for "auto-314000" (driver="qemu2")
	I0327 11:18:44.070070    9788 client.go:168] LocalClient.Create starting
	I0327 11:18:44.070196    9788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:18:44.070261    9788 main.go:141] libmachine: Decoding PEM data...
	I0327 11:18:44.070276    9788 main.go:141] libmachine: Parsing certificate...
	I0327 11:18:44.070336    9788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:18:44.070378    9788 main.go:141] libmachine: Decoding PEM data...
	I0327 11:18:44.070393    9788 main.go:141] libmachine: Parsing certificate...
	I0327 11:18:44.070910    9788 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:18:44.234870    9788 main.go:141] libmachine: Creating SSH key...
	I0327 11:18:44.324084    9788 main.go:141] libmachine: Creating Disk image...
	I0327 11:18:44.324092    9788 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:18:44.324284    9788 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/auto-314000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/auto-314000/disk.qcow2
	I0327 11:18:44.336913    9788 main.go:141] libmachine: STDOUT: 
	I0327 11:18:44.336928    9788 main.go:141] libmachine: STDERR: 
	I0327 11:18:44.336978    9788 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/auto-314000/disk.qcow2 +20000M
	I0327 11:18:44.348005    9788 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:18:44.348025    9788 main.go:141] libmachine: STDERR: 
	I0327 11:18:44.348038    9788 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/auto-314000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/auto-314000/disk.qcow2
	I0327 11:18:44.348042    9788 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:18:44.348089    9788 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/auto-314000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/auto-314000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/auto-314000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:53:a9:ff:93:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/auto-314000/disk.qcow2
	I0327 11:18:44.349906    9788 main.go:141] libmachine: STDOUT: 
	I0327 11:18:44.349921    9788 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:18:44.349934    9788 client.go:171] duration metric: took 279.861291ms to LocalClient.Create
	I0327 11:18:46.352093    9788 start.go:128] duration metric: took 2.342763541s to createHost
	I0327 11:18:46.352199    9788 start.go:83] releasing machines lock for "auto-314000", held for 2.3432595s
	W0327 11:18:46.352541    9788 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-314000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-314000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:18:46.361516    9788 out.go:177] 
	W0327 11:18:46.366671    9788 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:18:46.366690    9788 out.go:239] * 
	* 
	W0327 11:18:46.367929    9788 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:18:46.377620    9788 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.84s)
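Every failure in this group has the same root cause: nothing is accepting connections on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a network file descriptor. A minimal state check for the daemon, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs describe (the socket path is taken from the log above; the launchd label is a guess and varies by install):

    # Does the control socket exist at the path minikube is using?
    ls -l /var/run/socket_vmnet
    # Is any socket_vmnet daemon registered with launchd?
    sudo launchctl list | grep -i socket_vmnet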

TestNetworkPlugins/group/flannel/Start (9.86s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-314000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-314000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.854660292s)

-- stdout --
	* [flannel-314000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-314000" primary control-plane node in "flannel-314000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-314000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:18:48.676580    9898 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:18:48.676708    9898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:18:48.676712    9898 out.go:304] Setting ErrFile to fd 2...
	I0327 11:18:48.676715    9898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:18:48.676832    9898 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:18:48.678039    9898 out.go:298] Setting JSON to false
	I0327 11:18:48.694814    9898 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6501,"bootTime":1711557027,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:18:48.694897    9898 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:18:48.700183    9898 out.go:177] * [flannel-314000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:18:48.708106    9898 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:18:48.712137    9898 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:18:48.708146    9898 notify.go:220] Checking for updates...
	I0327 11:18:48.715132    9898 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:18:48.718131    9898 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:18:48.722139    9898 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:18:48.725041    9898 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:18:48.728454    9898 config.go:182] Loaded profile config "multinode-906000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:18:48.728525    9898 config.go:182] Loaded profile config "stopped-upgrade-042000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 11:18:48.728581    9898 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:18:48.733119    9898 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 11:18:48.740062    9898 start.go:297] selected driver: qemu2
	I0327 11:18:48.740067    9898 start.go:901] validating driver "qemu2" against <nil>
	I0327 11:18:48.740072    9898 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:18:48.742450    9898 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 11:18:48.746025    9898 out.go:177] * Automatically selected the socket_vmnet network
	I0327 11:18:48.749161    9898 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:18:48.749197    9898 cni.go:84] Creating CNI manager for "flannel"
	I0327 11:18:48.749201    9898 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0327 11:18:48.749237    9898 start.go:340] cluster config:
	{Name:flannel-314000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:18:48.753399    9898 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:18:48.760915    9898 out.go:177] * Starting "flannel-314000" primary control-plane node in "flannel-314000" cluster
	I0327 11:18:48.765053    9898 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 11:18:48.765065    9898 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 11:18:48.765071    9898 cache.go:56] Caching tarball of preloaded images
	I0327 11:18:48.765115    9898 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:18:48.765120    9898 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 11:18:48.765175    9898 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/flannel-314000/config.json ...
	I0327 11:18:48.765185    9898 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/flannel-314000/config.json: {Name:mk6e73dfac12d70dcec63d4d3b93f7a23f856fee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:18:48.765383    9898 start.go:360] acquireMachinesLock for flannel-314000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:18:48.765412    9898 start.go:364] duration metric: took 23.459µs to acquireMachinesLock for "flannel-314000"
	I0327 11:18:48.765424    9898 start.go:93] Provisioning new machine with config: &{Name:flannel-314000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:18:48.765448    9898 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:18:48.771103    9898 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 11:18:48.785699    9898 start.go:159] libmachine.API.Create for "flannel-314000" (driver="qemu2")
	I0327 11:18:48.785726    9898 client.go:168] LocalClient.Create starting
	I0327 11:18:48.785792    9898 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:18:48.785821    9898 main.go:141] libmachine: Decoding PEM data...
	I0327 11:18:48.785831    9898 main.go:141] libmachine: Parsing certificate...
	I0327 11:18:48.785876    9898 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:18:48.785897    9898 main.go:141] libmachine: Decoding PEM data...
	I0327 11:18:48.785903    9898 main.go:141] libmachine: Parsing certificate...
	I0327 11:18:48.786249    9898 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:18:48.938778    9898 main.go:141] libmachine: Creating SSH key...
	I0327 11:18:49.015926    9898 main.go:141] libmachine: Creating Disk image...
	I0327 11:18:49.015934    9898 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:18:49.016088    9898 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/flannel-314000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/flannel-314000/disk.qcow2
	I0327 11:18:49.028400    9898 main.go:141] libmachine: STDOUT: 
	I0327 11:18:49.028429    9898 main.go:141] libmachine: STDERR: 
	I0327 11:18:49.028488    9898 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/flannel-314000/disk.qcow2 +20000M
	I0327 11:18:49.039248    9898 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:18:49.039266    9898 main.go:141] libmachine: STDERR: 
	I0327 11:18:49.039284    9898 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/flannel-314000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/flannel-314000/disk.qcow2
	I0327 11:18:49.039289    9898 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:18:49.039318    9898 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/flannel-314000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/flannel-314000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/flannel-314000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:77:b5:52:8c:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/flannel-314000/disk.qcow2
	I0327 11:18:49.041017    9898 main.go:141] libmachine: STDOUT: 
	I0327 11:18:49.041032    9898 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:18:49.041051    9898 client.go:171] duration metric: took 255.321667ms to LocalClient.Create
	I0327 11:18:51.043296    9898 start.go:128] duration metric: took 2.277832625s to createHost
	I0327 11:18:51.043386    9898 start.go:83] releasing machines lock for "flannel-314000", held for 2.277979042s
	W0327 11:18:51.043452    9898 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:18:51.059540    9898 out.go:177] * Deleting "flannel-314000" in qemu2 ...
	W0327 11:18:51.084384    9898 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:18:51.084414    9898 start.go:728] Will try again in 5 seconds ...
	I0327 11:18:56.086675    9898 start.go:360] acquireMachinesLock for flannel-314000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:18:56.087143    9898 start.go:364] duration metric: took 347.459µs to acquireMachinesLock for "flannel-314000"
	I0327 11:18:56.087262    9898 start.go:93] Provisioning new machine with config: &{Name:flannel-314000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:18:56.087576    9898 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:18:56.097162    9898 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 11:18:56.145184    9898 start.go:159] libmachine.API.Create for "flannel-314000" (driver="qemu2")
	I0327 11:18:56.145238    9898 client.go:168] LocalClient.Create starting
	I0327 11:18:56.145341    9898 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:18:56.145399    9898 main.go:141] libmachine: Decoding PEM data...
	I0327 11:18:56.145415    9898 main.go:141] libmachine: Parsing certificate...
	I0327 11:18:56.145483    9898 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:18:56.145528    9898 main.go:141] libmachine: Decoding PEM data...
	I0327 11:18:56.145538    9898 main.go:141] libmachine: Parsing certificate...
	I0327 11:18:56.146056    9898 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:18:56.312857    9898 main.go:141] libmachine: Creating SSH key...
	I0327 11:18:56.428518    9898 main.go:141] libmachine: Creating Disk image...
	I0327 11:18:56.428529    9898 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:18:56.428699    9898 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/flannel-314000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/flannel-314000/disk.qcow2
	I0327 11:18:56.442928    9898 main.go:141] libmachine: STDOUT: 
	I0327 11:18:56.442963    9898 main.go:141] libmachine: STDERR: 
	I0327 11:18:56.443059    9898 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/flannel-314000/disk.qcow2 +20000M
	I0327 11:18:56.456772    9898 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:18:56.456792    9898 main.go:141] libmachine: STDERR: 
	I0327 11:18:56.456809    9898 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/flannel-314000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/flannel-314000/disk.qcow2
	I0327 11:18:56.456818    9898 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:18:56.456867    9898 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/flannel-314000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/flannel-314000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/flannel-314000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:a6:5e:fd:c3:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/flannel-314000/disk.qcow2
	I0327 11:18:56.459130    9898 main.go:141] libmachine: STDOUT: 
	I0327 11:18:56.459147    9898 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:18:56.459163    9898 client.go:171] duration metric: took 313.921375ms to LocalClient.Create
	I0327 11:18:58.461359    9898 start.go:128] duration metric: took 2.373761625s to createHost
	I0327 11:18:58.461494    9898 start.go:83] releasing machines lock for "flannel-314000", held for 2.374340291s
	W0327 11:18:58.461926    9898 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-314000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-314000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:18:58.473583    9898 out.go:177] 
	W0327 11:18:58.477781    9898 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:18:58.477815    9898 out.go:239] * 
	* 
	W0327 11:18:58.479450    9898 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:18:58.487706    9898 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.86s)
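The same refusal is reproducible outside the test harness. socket_vmnet_client connects to the unix socket and passes it to the wrapped command as fd 3 (visible above as -netdev socket,id=net0,fd=3), so any trivial command serves as a probe; a sketch with true standing in for the qemu-system-aarch64 invocation:

    # Fails with the same 'Failed to connect ... Connection refused' while the
    # daemon is down; exits 0 once it is healthy.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true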

TestNetworkPlugins/group/kindnet/Start (10.02s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-314000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-314000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (10.019604708s)

-- stdout --
	* [kindnet-314000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-314000" primary control-plane node in "kindnet-314000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-314000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:19:00.979860   10020 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:19:00.980004   10020 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:19:00.980007   10020 out.go:304] Setting ErrFile to fd 2...
	I0327 11:19:00.980010   10020 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:19:00.980122   10020 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:19:00.981469   10020 out.go:298] Setting JSON to false
	I0327 11:19:00.998175   10020 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6513,"bootTime":1711557027,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:19:00.998256   10020 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:19:01.003938   10020 out.go:177] * [kindnet-314000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:19:01.014995   10020 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:19:01.011895   10020 notify.go:220] Checking for updates...
	I0327 11:19:01.022961   10020 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:19:01.030995   10020 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:19:01.033913   10020 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:19:01.038027   10020 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:19:01.041977   10020 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:19:01.045395   10020 config.go:182] Loaded profile config "multinode-906000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:19:01.045471   10020 config.go:182] Loaded profile config "stopped-upgrade-042000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 11:19:01.045528   10020 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:19:01.049906   10020 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 11:19:01.056955   10020 start.go:297] selected driver: qemu2
	I0327 11:19:01.056960   10020 start.go:901] validating driver "qemu2" against <nil>
	I0327 11:19:01.056965   10020 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:19:01.059317   10020 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 11:19:01.061961   10020 out.go:177] * Automatically selected the socket_vmnet network
	I0327 11:19:01.065992   10020 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:19:01.066044   10020 cni.go:84] Creating CNI manager for "kindnet"
	I0327 11:19:01.066048   10020 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 11:19:01.066084   10020 start.go:340] cluster config:
	{Name:kindnet-314000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:19:01.070742   10020 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:19:01.078975   10020 out.go:177] * Starting "kindnet-314000" primary control-plane node in "kindnet-314000" cluster
	I0327 11:19:01.082849   10020 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 11:19:01.082873   10020 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 11:19:01.082881   10020 cache.go:56] Caching tarball of preloaded images
	I0327 11:19:01.082944   10020 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:19:01.082950   10020 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 11:19:01.083007   10020 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/kindnet-314000/config.json ...
	I0327 11:19:01.083018   10020 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/kindnet-314000/config.json: {Name:mk6f018bcbc2b49083eb9555779c91c147a028a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:19:01.083295   10020 start.go:360] acquireMachinesLock for kindnet-314000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:19:01.083328   10020 start.go:364] duration metric: took 27.292µs to acquireMachinesLock for "kindnet-314000"
	I0327 11:19:01.083341   10020 start.go:93] Provisioning new machine with config: &{Name:kindnet-314000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:19:01.083373   10020 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:19:01.086006   10020 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 11:19:01.100357   10020 start.go:159] libmachine.API.Create for "kindnet-314000" (driver="qemu2")
	I0327 11:19:01.100385   10020 client.go:168] LocalClient.Create starting
	I0327 11:19:01.100441   10020 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:19:01.100469   10020 main.go:141] libmachine: Decoding PEM data...
	I0327 11:19:01.100482   10020 main.go:141] libmachine: Parsing certificate...
	I0327 11:19:01.100524   10020 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:19:01.100546   10020 main.go:141] libmachine: Decoding PEM data...
	I0327 11:19:01.100553   10020 main.go:141] libmachine: Parsing certificate...
	I0327 11:19:01.100889   10020 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:19:01.252200   10020 main.go:141] libmachine: Creating SSH key...
	I0327 11:19:01.383947   10020 main.go:141] libmachine: Creating Disk image...
	I0327 11:19:01.383959   10020 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:19:01.384108   10020 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kindnet-314000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kindnet-314000/disk.qcow2
	I0327 11:19:01.396590   10020 main.go:141] libmachine: STDOUT: 
	I0327 11:19:01.396615   10020 main.go:141] libmachine: STDERR: 
	I0327 11:19:01.396671   10020 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kindnet-314000/disk.qcow2 +20000M
	I0327 11:19:01.407527   10020 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:19:01.407544   10020 main.go:141] libmachine: STDERR: 
	I0327 11:19:01.407564   10020 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kindnet-314000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kindnet-314000/disk.qcow2
	I0327 11:19:01.407568   10020 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:19:01.407594   10020 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kindnet-314000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kindnet-314000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kindnet-314000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:fc:d3:42:0b:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kindnet-314000/disk.qcow2
	I0327 11:19:01.409421   10020 main.go:141] libmachine: STDOUT: 
	I0327 11:19:01.409433   10020 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:19:01.409457   10020 client.go:171] duration metric: took 309.069416ms to LocalClient.Create
	I0327 11:19:03.411840   10020 start.go:128] duration metric: took 2.328441125s to createHost
	I0327 11:19:03.411948   10020 start.go:83] releasing machines lock for "kindnet-314000", held for 2.328626917s
	W0327 11:19:03.412012   10020 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:19:03.425315   10020 out.go:177] * Deleting "kindnet-314000" in qemu2 ...
	W0327 11:19:03.456064   10020 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:19:03.456095   10020 start.go:728] Will try again in 5 seconds ...
	I0327 11:19:08.458273   10020 start.go:360] acquireMachinesLock for kindnet-314000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:19:08.458663   10020 start.go:364] duration metric: took 320.917µs to acquireMachinesLock for "kindnet-314000"
	I0327 11:19:08.458712   10020 start.go:93] Provisioning new machine with config: &{Name:kindnet-314000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:19:08.458910   10020 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:19:08.466514   10020 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 11:19:08.505731   10020 start.go:159] libmachine.API.Create for "kindnet-314000" (driver="qemu2")
	I0327 11:19:08.505789   10020 client.go:168] LocalClient.Create starting
	I0327 11:19:08.505892   10020 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:19:08.505975   10020 main.go:141] libmachine: Decoding PEM data...
	I0327 11:19:08.505990   10020 main.go:141] libmachine: Parsing certificate...
	I0327 11:19:08.506044   10020 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:19:08.506095   10020 main.go:141] libmachine: Decoding PEM data...
	I0327 11:19:08.506110   10020 main.go:141] libmachine: Parsing certificate...
	I0327 11:19:08.506642   10020 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:19:08.666825   10020 main.go:141] libmachine: Creating SSH key...
	I0327 11:19:08.897682   10020 main.go:141] libmachine: Creating Disk image...
	I0327 11:19:08.897692   10020 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:19:08.897910   10020 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kindnet-314000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kindnet-314000/disk.qcow2
	I0327 11:19:08.911129   10020 main.go:141] libmachine: STDOUT: 
	I0327 11:19:08.911150   10020 main.go:141] libmachine: STDERR: 
	I0327 11:19:08.911218   10020 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kindnet-314000/disk.qcow2 +20000M
	I0327 11:19:08.922285   10020 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:19:08.922301   10020 main.go:141] libmachine: STDERR: 
	I0327 11:19:08.922314   10020 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kindnet-314000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kindnet-314000/disk.qcow2
	I0327 11:19:08.922319   10020 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:19:08.922354   10020 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kindnet-314000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kindnet-314000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kindnet-314000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:dc:ff:b5:59:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kindnet-314000/disk.qcow2
	I0327 11:19:08.924175   10020 main.go:141] libmachine: STDOUT: 
	I0327 11:19:08.924189   10020 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:19:08.924203   10020 client.go:171] duration metric: took 418.410625ms to LocalClient.Create
	I0327 11:19:10.926411   10020 start.go:128] duration metric: took 2.467457959s to createHost
	I0327 11:19:10.926636   10020 start.go:83] releasing machines lock for "kindnet-314000", held for 2.467841375s
	W0327 11:19:10.926965   10020 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-314000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-314000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:19:10.943492   10020 out.go:177] 
	W0327 11:19:10.946679   10020 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:19:10.946703   10020 out.go:239] * 
	* 
	W0327 11:19:10.948345   10020 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:19:10.956610   10020 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (10.02s)
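Two candidate recoveries, both assumptions since the agent's service setup is not captured in this log: restart the daemon, or rerun with the qemu2 driver's user-mode network, which avoids socket_vmnet at the cost of features such as minikube tunnel and minikube service:

    # If socket_vmnet is managed as a Homebrew service (vmnet requires root):
    sudo brew services restart socket_vmnet
    # Fallback without the daemon; the flag value is assumed from the qemu2
    # driver docs (some minikube versions spell it 'builtin' rather than 'user'):
    out/minikube-darwin-arm64 start -p kindnet-314000 --driver=qemu2 --network=user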

TestNetworkPlugins/group/enable-default-cni/Start (9.78s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-314000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-314000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.77763225s)

-- stdout --
	* [enable-default-cni-314000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-314000" primary control-plane node in "enable-default-cni-314000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-314000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:19:13.370584   10137 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:19:13.370743   10137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:19:13.370749   10137 out.go:304] Setting ErrFile to fd 2...
	I0327 11:19:13.370751   10137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:19:13.370905   10137 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:19:13.372275   10137 out.go:298] Setting JSON to false
	I0327 11:19:13.390942   10137 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6526,"bootTime":1711557027,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:19:13.391074   10137 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:19:13.395811   10137 out.go:177] * [enable-default-cni-314000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:19:13.400049   10137 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:19:13.400170   10137 notify.go:220] Checking for updates...
	I0327 11:19:13.403978   10137 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:19:13.407951   10137 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:19:13.411005   10137 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:19:13.414024   10137 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:19:13.417010   10137 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:19:13.420345   10137 config.go:182] Loaded profile config "multinode-906000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:19:13.420411   10137 config.go:182] Loaded profile config "stopped-upgrade-042000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 11:19:13.420464   10137 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:19:13.425019   10137 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 11:19:13.431995   10137 start.go:297] selected driver: qemu2
	I0327 11:19:13.432005   10137 start.go:901] validating driver "qemu2" against <nil>
	I0327 11:19:13.432011   10137 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:19:13.434661   10137 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 11:19:13.438984   10137 out.go:177] * Automatically selected the socket_vmnet network
	E0327 11:19:13.443107   10137 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0327 11:19:13.443126   10137 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:19:13.443168   10137 cni.go:84] Creating CNI manager for "bridge"
	I0327 11:19:13.443172   10137 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 11:19:13.443215   10137 start.go:340] cluster config:
	{Name:enable-default-cni-314000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:19:13.447975   10137 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:19:13.452029   10137 out.go:177] * Starting "enable-default-cni-314000" primary control-plane node in "enable-default-cni-314000" cluster
	I0327 11:19:13.458930   10137 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 11:19:13.458967   10137 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 11:19:13.458979   10137 cache.go:56] Caching tarball of preloaded images
	I0327 11:19:13.459068   10137 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:19:13.459075   10137 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 11:19:13.459152   10137 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/enable-default-cni-314000/config.json ...
	I0327 11:19:13.459164   10137 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/enable-default-cni-314000/config.json: {Name:mk9a5a05d1b107664540dad5c8b50ee37b9290f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:19:13.459511   10137 start.go:360] acquireMachinesLock for enable-default-cni-314000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:19:13.459550   10137 start.go:364] duration metric: took 26.417µs to acquireMachinesLock for "enable-default-cni-314000"
	I0327 11:19:13.459567   10137 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-314000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:19:13.459619   10137 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:19:13.467021   10137 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 11:19:13.482635   10137 start.go:159] libmachine.API.Create for "enable-default-cni-314000" (driver="qemu2")
	I0327 11:19:13.482667   10137 client.go:168] LocalClient.Create starting
	I0327 11:19:13.482742   10137 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:19:13.482771   10137 main.go:141] libmachine: Decoding PEM data...
	I0327 11:19:13.482781   10137 main.go:141] libmachine: Parsing certificate...
	I0327 11:19:13.482822   10137 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:19:13.482846   10137 main.go:141] libmachine: Decoding PEM data...
	I0327 11:19:13.482853   10137 main.go:141] libmachine: Parsing certificate...
	I0327 11:19:13.483235   10137 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:19:13.636504   10137 main.go:141] libmachine: Creating SSH key...
	I0327 11:19:13.712224   10137 main.go:141] libmachine: Creating Disk image...
	I0327 11:19:13.712236   10137 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:19:13.712422   10137 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/enable-default-cni-314000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/enable-default-cni-314000/disk.qcow2
	I0327 11:19:13.724991   10137 main.go:141] libmachine: STDOUT: 
	I0327 11:19:13.725016   10137 main.go:141] libmachine: STDERR: 
	I0327 11:19:13.725080   10137 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/enable-default-cni-314000/disk.qcow2 +20000M
	I0327 11:19:13.735890   10137 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:19:13.735908   10137 main.go:141] libmachine: STDERR: 
	I0327 11:19:13.735939   10137 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/enable-default-cni-314000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/enable-default-cni-314000/disk.qcow2
	I0327 11:19:13.735946   10137 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:19:13.735980   10137 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/enable-default-cni-314000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/enable-default-cni-314000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/enable-default-cni-314000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:fa:a1:25:bc:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/enable-default-cni-314000/disk.qcow2
	I0327 11:19:13.737725   10137 main.go:141] libmachine: STDOUT: 
	I0327 11:19:13.737743   10137 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:19:13.737768   10137 client.go:171] duration metric: took 255.096333ms to LocalClient.Create
	I0327 11:19:15.737918   10137 start.go:128] duration metric: took 2.278301833s to createHost
	I0327 11:19:15.737951   10137 start.go:83] releasing machines lock for "enable-default-cni-314000", held for 2.278410208s
	W0327 11:19:15.738012   10137 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:19:15.751894   10137 out.go:177] * Deleting "enable-default-cni-314000" in qemu2 ...
	W0327 11:19:15.769524   10137 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:19:15.769533   10137 start.go:728] Will try again in 5 seconds ...
	I0327 11:19:20.771594   10137 start.go:360] acquireMachinesLock for enable-default-cni-314000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:19:20.771738   10137 start.go:364] duration metric: took 104.125µs to acquireMachinesLock for "enable-default-cni-314000"
	I0327 11:19:20.771782   10137 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-314000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:19:20.771850   10137 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:19:20.780049   10137 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 11:19:20.799537   10137 start.go:159] libmachine.API.Create for "enable-default-cni-314000" (driver="qemu2")
	I0327 11:19:20.799564   10137 client.go:168] LocalClient.Create starting
	I0327 11:19:20.799630   10137 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:19:20.799667   10137 main.go:141] libmachine: Decoding PEM data...
	I0327 11:19:20.799676   10137 main.go:141] libmachine: Parsing certificate...
	I0327 11:19:20.799714   10137 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:19:20.799743   10137 main.go:141] libmachine: Decoding PEM data...
	I0327 11:19:20.799751   10137 main.go:141] libmachine: Parsing certificate...
	I0327 11:19:20.800029   10137 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:19:20.953970   10137 main.go:141] libmachine: Creating SSH key...
	I0327 11:19:21.045884   10137 main.go:141] libmachine: Creating Disk image...
	I0327 11:19:21.045895   10137 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:19:21.046055   10137 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/enable-default-cni-314000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/enable-default-cni-314000/disk.qcow2
	I0327 11:19:21.058884   10137 main.go:141] libmachine: STDOUT: 
	I0327 11:19:21.058906   10137 main.go:141] libmachine: STDERR: 
	I0327 11:19:21.058963   10137 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/enable-default-cni-314000/disk.qcow2 +20000M
	I0327 11:19:21.069837   10137 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:19:21.069853   10137 main.go:141] libmachine: STDERR: 
	I0327 11:19:21.069864   10137 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/enable-default-cni-314000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/enable-default-cni-314000/disk.qcow2
	I0327 11:19:21.069874   10137 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:19:21.069923   10137 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/enable-default-cni-314000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/enable-default-cni-314000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/enable-default-cni-314000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:1b:99:4e:70:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/enable-default-cni-314000/disk.qcow2
	I0327 11:19:21.071704   10137 main.go:141] libmachine: STDOUT: 
	I0327 11:19:21.071719   10137 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:19:21.071731   10137 client.go:171] duration metric: took 272.165542ms to LocalClient.Create
	I0327 11:19:23.073953   10137 start.go:128] duration metric: took 2.302080709s to createHost
	I0327 11:19:23.074029   10137 start.go:83] releasing machines lock for "enable-default-cni-314000", held for 2.30228375s
	W0327 11:19:23.074390   10137 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-314000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:19:23.086022   10137 out.go:177] 
	W0327 11:19:23.090161   10137 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:19:23.090235   10137 out.go:239] * 
	W0327 11:19:23.092828   10137 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:19:23.102143   10137 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.78s)
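
Beyond the socket_vmnet failure, the stderr above also records minikube rewriting the deprecated --enable-default-cni flag to --cni=bridge (start_flags.go:464). Once the daemon is reachable, an equivalent invocation without the deprecated flag would be (same profile and driver as above; a sketch, not output captured in this report):

	out/minikube-darwin-arm64 start -p enable-default-cni-314000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2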

TestNetworkPlugins/group/bridge/Start (9.9s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-314000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-314000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.8950575s)

-- stdout --
	* [bridge-314000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-314000" primary control-plane node in "bridge-314000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-314000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:19:25.413044   10253 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:19:25.413196   10253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:19:25.413199   10253 out.go:304] Setting ErrFile to fd 2...
	I0327 11:19:25.413201   10253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:19:25.413322   10253 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:19:25.414470   10253 out.go:298] Setting JSON to false
	I0327 11:19:25.430858   10253 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6538,"bootTime":1711557027,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:19:25.430929   10253 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:19:25.436441   10253 out.go:177] * [bridge-314000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:19:25.444467   10253 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:19:25.447453   10253 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:19:25.444529   10253 notify.go:220] Checking for updates...
	I0327 11:19:25.454438   10253 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:19:25.458333   10253 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:19:25.461429   10253 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:19:25.464447   10253 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:19:25.467738   10253 config.go:182] Loaded profile config "multinode-906000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:19:25.467802   10253 config.go:182] Loaded profile config "stopped-upgrade-042000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 11:19:25.467845   10253 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:19:25.471422   10253 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 11:19:25.478349   10253 start.go:297] selected driver: qemu2
	I0327 11:19:25.478355   10253 start.go:901] validating driver "qemu2" against <nil>
	I0327 11:19:25.478360   10253 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:19:25.480506   10253 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 11:19:25.484323   10253 out.go:177] * Automatically selected the socket_vmnet network
	I0327 11:19:25.487450   10253 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:19:25.487484   10253 cni.go:84] Creating CNI manager for "bridge"
	I0327 11:19:25.487488   10253 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 11:19:25.487517   10253 start.go:340] cluster config:
	{Name:bridge-314000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:19:25.491771   10253 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:19:25.499435   10253 out.go:177] * Starting "bridge-314000" primary control-plane node in "bridge-314000" cluster
	I0327 11:19:25.503335   10253 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 11:19:25.503350   10253 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 11:19:25.503359   10253 cache.go:56] Caching tarball of preloaded images
	I0327 11:19:25.503420   10253 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:19:25.503425   10253 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 11:19:25.503511   10253 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/bridge-314000/config.json ...
	I0327 11:19:25.503527   10253 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/bridge-314000/config.json: {Name:mk30908aead13f20b7748b4267d1bea3e0c6502d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:19:25.503730   10253 start.go:360] acquireMachinesLock for bridge-314000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:19:25.503759   10253 start.go:364] duration metric: took 23.542µs to acquireMachinesLock for "bridge-314000"
	I0327 11:19:25.503771   10253 start.go:93] Provisioning new machine with config: &{Name:bridge-314000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:19:25.503801   10253 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:19:25.507389   10253 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 11:19:25.522241   10253 start.go:159] libmachine.API.Create for "bridge-314000" (driver="qemu2")
	I0327 11:19:25.522266   10253 client.go:168] LocalClient.Create starting
	I0327 11:19:25.522321   10253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:19:25.522351   10253 main.go:141] libmachine: Decoding PEM data...
	I0327 11:19:25.522362   10253 main.go:141] libmachine: Parsing certificate...
	I0327 11:19:25.522406   10253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:19:25.522427   10253 main.go:141] libmachine: Decoding PEM data...
	I0327 11:19:25.522434   10253 main.go:141] libmachine: Parsing certificate...
	I0327 11:19:25.522761   10253 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:19:25.673646   10253 main.go:141] libmachine: Creating SSH key...
	I0327 11:19:25.718826   10253 main.go:141] libmachine: Creating Disk image...
	I0327 11:19:25.718839   10253 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:19:25.719014   10253 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/bridge-314000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/bridge-314000/disk.qcow2
	I0327 11:19:25.731325   10253 main.go:141] libmachine: STDOUT: 
	I0327 11:19:25.731343   10253 main.go:141] libmachine: STDERR: 
	I0327 11:19:25.731391   10253 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/bridge-314000/disk.qcow2 +20000M
	I0327 11:19:25.742333   10253 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:19:25.742348   10253 main.go:141] libmachine: STDERR: 
	I0327 11:19:25.742363   10253 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/bridge-314000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/bridge-314000/disk.qcow2
	I0327 11:19:25.742367   10253 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:19:25.742400   10253 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/bridge-314000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/bridge-314000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/bridge-314000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:9f:21:32:83:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/bridge-314000/disk.qcow2
	I0327 11:19:25.744122   10253 main.go:141] libmachine: STDOUT: 
	I0327 11:19:25.744135   10253 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:19:25.744152   10253 client.go:171] duration metric: took 221.881708ms to LocalClient.Create
	I0327 11:19:27.745447   10253 start.go:128] duration metric: took 2.241635583s to createHost
	I0327 11:19:27.745484   10253 start.go:83] releasing machines lock for "bridge-314000", held for 2.241733958s
	W0327 11:19:27.745553   10253 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:19:27.758641   10253 out.go:177] * Deleting "bridge-314000" in qemu2 ...
	W0327 11:19:27.779329   10253 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:19:27.779342   10253 start.go:728] Will try again in 5 seconds ...
	I0327 11:19:32.781446   10253 start.go:360] acquireMachinesLock for bridge-314000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:19:32.781718   10253 start.go:364] duration metric: took 202.125µs to acquireMachinesLock for "bridge-314000"
	I0327 11:19:32.781748   10253 start.go:93] Provisioning new machine with config: &{Name:bridge-314000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:19:32.781887   10253 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:19:32.791217   10253 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 11:19:32.815930   10253 start.go:159] libmachine.API.Create for "bridge-314000" (driver="qemu2")
	I0327 11:19:32.815969   10253 client.go:168] LocalClient.Create starting
	I0327 11:19:32.816063   10253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:19:32.816103   10253 main.go:141] libmachine: Decoding PEM data...
	I0327 11:19:32.816114   10253 main.go:141] libmachine: Parsing certificate...
	I0327 11:19:32.816157   10253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:19:32.816185   10253 main.go:141] libmachine: Decoding PEM data...
	I0327 11:19:32.816190   10253 main.go:141] libmachine: Parsing certificate...
	I0327 11:19:32.816531   10253 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:19:32.970834   10253 main.go:141] libmachine: Creating SSH key...
	I0327 11:19:33.210684   10253 main.go:141] libmachine: Creating Disk image...
	I0327 11:19:33.210698   10253 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:19:33.210947   10253 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/bridge-314000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/bridge-314000/disk.qcow2
	I0327 11:19:33.224251   10253 main.go:141] libmachine: STDOUT: 
	I0327 11:19:33.224272   10253 main.go:141] libmachine: STDERR: 
	I0327 11:19:33.224361   10253 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/bridge-314000/disk.qcow2 +20000M
	I0327 11:19:33.236599   10253 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:19:33.236623   10253 main.go:141] libmachine: STDERR: 
	I0327 11:19:33.236639   10253 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/bridge-314000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/bridge-314000/disk.qcow2
	I0327 11:19:33.236644   10253 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:19:33.236676   10253 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/bridge-314000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/bridge-314000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/bridge-314000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:71:f9:8b:03:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/bridge-314000/disk.qcow2
	I0327 11:19:33.238658   10253 main.go:141] libmachine: STDOUT: 
	I0327 11:19:33.238674   10253 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:19:33.238688   10253 client.go:171] duration metric: took 422.716209ms to LocalClient.Create
	I0327 11:19:35.240906   10253 start.go:128] duration metric: took 2.459006125s to createHost
	I0327 11:19:35.240974   10253 start.go:83] releasing machines lock for "bridge-314000", held for 2.459258333s
	W0327 11:19:35.241230   10253 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-314000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:19:35.252001   10253 out.go:177] 
	W0327 11:19:35.255916   10253 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:19:35.255936   10253 out.go:239] * 
	W0327 11:19:35.257592   10253 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:19:35.268934   10253 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.90s)
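
As in the previous groups, the roughly 10s test duration is two ~2.3-2.5s createHost attempts separated by the fixed 5-second retry ("Will try again in 5 seconds ..."). The error text itself names the recovery path; with the daemon up, that would be (profile name taken from the log; a sketch only):

	# Cleanup suggested by the "may fix it" message above, then retry the start
	out/minikube-darwin-arm64 delete -p bridge-314000
	out/minikube-darwin-arm64 start -p bridge-314000 --memory=3072 --cni=bridge --driver=qemu2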

TestNetworkPlugins/group/kubenet/Start (9.85s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-314000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-314000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.85198925s)

-- stdout --
	* [kubenet-314000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-314000" primary control-plane node in "kubenet-314000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-314000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:19:37.585841   10370 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:19:37.586003   10370 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:19:37.586006   10370 out.go:304] Setting ErrFile to fd 2...
	I0327 11:19:37.586008   10370 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:19:37.586130   10370 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:19:37.587278   10370 out.go:298] Setting JSON to false
	I0327 11:19:37.603648   10370 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6550,"bootTime":1711557027,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:19:37.603719   10370 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:19:37.609397   10370 out.go:177] * [kubenet-314000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:19:37.616314   10370 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:19:37.618159   10370 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:19:37.616355   10370 notify.go:220] Checking for updates...
	I0327 11:19:37.622285   10370 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:19:37.625286   10370 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:19:37.628153   10370 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:19:37.631274   10370 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:19:37.634719   10370 config.go:182] Loaded profile config "multinode-906000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:19:37.634784   10370 config.go:182] Loaded profile config "stopped-upgrade-042000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 11:19:37.634831   10370 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:19:37.638137   10370 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 11:19:37.645237   10370 start.go:297] selected driver: qemu2
	I0327 11:19:37.645243   10370 start.go:901] validating driver "qemu2" against <nil>
	I0327 11:19:37.645248   10370 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:19:37.647495   10370 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 11:19:37.648725   10370 out.go:177] * Automatically selected the socket_vmnet network
	I0327 11:19:37.652332   10370 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:19:37.652370   10370 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0327 11:19:37.652394   10370 start.go:340] cluster config:
	{Name:kubenet-314000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kubenet-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:19:37.656530   10370 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:19:37.664245   10370 out.go:177] * Starting "kubenet-314000" primary control-plane node in "kubenet-314000" cluster
	I0327 11:19:37.668294   10370 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 11:19:37.668310   10370 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 11:19:37.668320   10370 cache.go:56] Caching tarball of preloaded images
	I0327 11:19:37.668378   10370 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:19:37.668384   10370 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 11:19:37.668457   10370 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/kubenet-314000/config.json ...
	I0327 11:19:37.668468   10370 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/kubenet-314000/config.json: {Name:mke973df9d257d4dd569d0ec92f5dd1faddcd149 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:19:37.668667   10370 start.go:360] acquireMachinesLock for kubenet-314000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:19:37.668694   10370 start.go:364] duration metric: took 22.5µs to acquireMachinesLock for "kubenet-314000"
	I0327 11:19:37.668707   10370 start.go:93] Provisioning new machine with config: &{Name:kubenet-314000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kubenet-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:19:37.668735   10370 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:19:37.676280   10370 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 11:19:37.690827   10370 start.go:159] libmachine.API.Create for "kubenet-314000" (driver="qemu2")
	I0327 11:19:37.690851   10370 client.go:168] LocalClient.Create starting
	I0327 11:19:37.690912   10370 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:19:37.690940   10370 main.go:141] libmachine: Decoding PEM data...
	I0327 11:19:37.690961   10370 main.go:141] libmachine: Parsing certificate...
	I0327 11:19:37.691006   10370 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:19:37.691031   10370 main.go:141] libmachine: Decoding PEM data...
	I0327 11:19:37.691040   10370 main.go:141] libmachine: Parsing certificate...
	I0327 11:19:37.691381   10370 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:19:37.844676   10370 main.go:141] libmachine: Creating SSH key...
	I0327 11:19:37.979529   10370 main.go:141] libmachine: Creating Disk image...
	I0327 11:19:37.979537   10370 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:19:37.979709   10370 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubenet-314000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubenet-314000/disk.qcow2
	I0327 11:19:37.992218   10370 main.go:141] libmachine: STDOUT: 
	I0327 11:19:37.992235   10370 main.go:141] libmachine: STDERR: 
	I0327 11:19:37.992299   10370 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubenet-314000/disk.qcow2 +20000M
	I0327 11:19:38.003608   10370 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:19:38.003631   10370 main.go:141] libmachine: STDERR: 
	I0327 11:19:38.003652   10370 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubenet-314000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubenet-314000/disk.qcow2
	I0327 11:19:38.003668   10370 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:19:38.003700   10370 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubenet-314000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubenet-314000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubenet-314000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:49:67:55:28:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubenet-314000/disk.qcow2
	I0327 11:19:38.005533   10370 main.go:141] libmachine: STDOUT: 
	I0327 11:19:38.005549   10370 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:19:38.005568   10370 client.go:171] duration metric: took 314.716ms to LocalClient.Create
	I0327 11:19:40.007761   10370 start.go:128] duration metric: took 2.339014291s to createHost
	I0327 11:19:40.007869   10370 start.go:83] releasing machines lock for "kubenet-314000", held for 2.339181875s
	W0327 11:19:40.007977   10370 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:19:40.025090   10370 out.go:177] * Deleting "kubenet-314000" in qemu2 ...
	W0327 11:19:40.054158   10370 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:19:40.054189   10370 start.go:728] Will try again in 5 seconds ...
	I0327 11:19:45.056220   10370 start.go:360] acquireMachinesLock for kubenet-314000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:19:45.056319   10370 start.go:364] duration metric: took 83.334µs to acquireMachinesLock for "kubenet-314000"
	I0327 11:19:45.056332   10370 start.go:93] Provisioning new machine with config: &{Name:kubenet-314000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kubenet-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:19:45.056373   10370 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:19:45.065569   10370 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 11:19:45.081631   10370 start.go:159] libmachine.API.Create for "kubenet-314000" (driver="qemu2")
	I0327 11:19:45.081664   10370 client.go:168] LocalClient.Create starting
	I0327 11:19:45.081742   10370 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:19:45.081777   10370 main.go:141] libmachine: Decoding PEM data...
	I0327 11:19:45.081784   10370 main.go:141] libmachine: Parsing certificate...
	I0327 11:19:45.081820   10370 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:19:45.081841   10370 main.go:141] libmachine: Decoding PEM data...
	I0327 11:19:45.081848   10370 main.go:141] libmachine: Parsing certificate...
	I0327 11:19:45.082184   10370 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:19:45.234933   10370 main.go:141] libmachine: Creating SSH key...
	I0327 11:19:45.341927   10370 main.go:141] libmachine: Creating Disk image...
	I0327 11:19:45.341937   10370 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:19:45.342144   10370 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubenet-314000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubenet-314000/disk.qcow2
	I0327 11:19:45.354936   10370 main.go:141] libmachine: STDOUT: 
	I0327 11:19:45.354957   10370 main.go:141] libmachine: STDERR: 
	I0327 11:19:45.355087   10370 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubenet-314000/disk.qcow2 +20000M
	I0327 11:19:45.366049   10370 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:19:45.366104   10370 main.go:141] libmachine: STDERR: 
	I0327 11:19:45.366125   10370 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubenet-314000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubenet-314000/disk.qcow2
	I0327 11:19:45.366130   10370 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:19:45.366173   10370 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubenet-314000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubenet-314000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubenet-314000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:5a:40:5f:b8:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/kubenet-314000/disk.qcow2
	I0327 11:19:45.368062   10370 main.go:141] libmachine: STDOUT: 
	I0327 11:19:45.368077   10370 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:19:45.368092   10370 client.go:171] duration metric: took 286.425709ms to LocalClient.Create
	I0327 11:19:47.370209   10370 start.go:128] duration metric: took 2.313837166s to createHost
	I0327 11:19:47.370263   10370 start.go:83] releasing machines lock for "kubenet-314000", held for 2.313952791s
	W0327 11:19:47.370559   10370 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-314000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:19:47.382106   10370 out.go:177] 
	W0327 11:19:47.385068   10370 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:19:47.385078   10370 out.go:239] * 
	W0327 11:19:47.386043   10370 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:19:47.398997   10370 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.85s)
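Note: every failure in this group follows the same chain. libmachine builds the qcow2 disk successfully (qemu-img convert, then resize), but the QEMU launch is wrapped in socket_vmnet_client, which exits immediately because nothing is listening on /var/run/socket_vmnet. A quick way to confirm the daemon state on the build host (a sketch only; the Homebrew service name and install paths below assume the setup described in minikube's qemu driver docs and may differ on this agent):

	# check for a listener on the unix socket (BSD nc supports -U)
	nc -U /var/run/socket_vmnet < /dev/null || echo "no listener"
	# if installed via Homebrew, the daemon normally runs as a root service
	sudo brew services start socket_vmnet
	# or run it directly, with the gateway address per the socket_vmnet README
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet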

TestNetworkPlugins/group/custom-flannel/Start (9.86s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-314000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-314000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.86305775s)

-- stdout --
	* [custom-flannel-314000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-314000" primary control-plane node in "custom-flannel-314000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-314000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:19:49.701251   10482 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:19:49.701392   10482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:19:49.701394   10482 out.go:304] Setting ErrFile to fd 2...
	I0327 11:19:49.701397   10482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:19:49.701519   10482 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:19:49.702614   10482 out.go:298] Setting JSON to false
	I0327 11:19:49.718720   10482 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6562,"bootTime":1711557027,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:19:49.718784   10482 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:19:49.724543   10482 out.go:177] * [custom-flannel-314000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:19:49.737647   10482 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:19:49.732740   10482 notify.go:220] Checking for updates...
	I0327 11:19:49.745693   10482 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:19:49.749655   10482 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:19:49.753691   10482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:19:49.755098   10482 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:19:49.758673   10482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:19:49.763059   10482 config.go:182] Loaded profile config "multinode-906000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:19:49.763118   10482 config.go:182] Loaded profile config "stopped-upgrade-042000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 11:19:49.763161   10482 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:19:49.766553   10482 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 11:19:49.773617   10482 start.go:297] selected driver: qemu2
	I0327 11:19:49.773623   10482 start.go:901] validating driver "qemu2" against <nil>
	I0327 11:19:49.773627   10482 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:19:49.775829   10482 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 11:19:49.778544   10482 out.go:177] * Automatically selected the socket_vmnet network
	I0327 11:19:49.781744   10482 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:19:49.781799   10482 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0327 11:19:49.781807   10482 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0327 11:19:49.781852   10482 start.go:340] cluster config:
	{Name:custom-flannel-314000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:19:49.786187   10482 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:19:49.794633   10482 out.go:177] * Starting "custom-flannel-314000" primary control-plane node in "custom-flannel-314000" cluster
	I0327 11:19:49.798590   10482 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 11:19:49.798607   10482 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 11:19:49.798616   10482 cache.go:56] Caching tarball of preloaded images
	I0327 11:19:49.798669   10482 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:19:49.798674   10482 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 11:19:49.798721   10482 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/custom-flannel-314000/config.json ...
	I0327 11:19:49.798731   10482 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/custom-flannel-314000/config.json: {Name:mkf2070887346f376896c4ec12e82d429bdce647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:19:49.798925   10482 start.go:360] acquireMachinesLock for custom-flannel-314000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:19:49.798955   10482 start.go:364] duration metric: took 24µs to acquireMachinesLock for "custom-flannel-314000"
	I0327 11:19:49.798971   10482 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-314000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:19:49.799008   10482 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:19:49.807653   10482 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 11:19:49.822281   10482 start.go:159] libmachine.API.Create for "custom-flannel-314000" (driver="qemu2")
	I0327 11:19:49.822309   10482 client.go:168] LocalClient.Create starting
	I0327 11:19:49.822372   10482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:19:49.822402   10482 main.go:141] libmachine: Decoding PEM data...
	I0327 11:19:49.822417   10482 main.go:141] libmachine: Parsing certificate...
	I0327 11:19:49.822467   10482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:19:49.822491   10482 main.go:141] libmachine: Decoding PEM data...
	I0327 11:19:49.822497   10482 main.go:141] libmachine: Parsing certificate...
	I0327 11:19:49.822835   10482 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:19:49.976140   10482 main.go:141] libmachine: Creating SSH key...
	I0327 11:19:50.173931   10482 main.go:141] libmachine: Creating Disk image...
	I0327 11:19:50.173944   10482 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:19:50.174154   10482 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/custom-flannel-314000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/custom-flannel-314000/disk.qcow2
	I0327 11:19:50.187108   10482 main.go:141] libmachine: STDOUT: 
	I0327 11:19:50.187130   10482 main.go:141] libmachine: STDERR: 
	I0327 11:19:50.187192   10482 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/custom-flannel-314000/disk.qcow2 +20000M
	I0327 11:19:50.198251   10482 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:19:50.198267   10482 main.go:141] libmachine: STDERR: 
	I0327 11:19:50.198288   10482 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/custom-flannel-314000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/custom-flannel-314000/disk.qcow2
	I0327 11:19:50.198293   10482 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:19:50.198321   10482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/custom-flannel-314000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/custom-flannel-314000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/custom-flannel-314000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:a7:a4:c3:57:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/custom-flannel-314000/disk.qcow2
	I0327 11:19:50.200146   10482 main.go:141] libmachine: STDOUT: 
	I0327 11:19:50.200164   10482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:19:50.200184   10482 client.go:171] duration metric: took 377.871375ms to LocalClient.Create
	I0327 11:19:52.202482   10482 start.go:128] duration metric: took 2.4034625s to createHost
	I0327 11:19:52.202637   10482 start.go:83] releasing machines lock for "custom-flannel-314000", held for 2.403689167s
	W0327 11:19:52.202702   10482 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:19:52.208878   10482 out.go:177] * Deleting "custom-flannel-314000" in qemu2 ...
	W0327 11:19:52.242459   10482 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:19:52.242484   10482 start.go:728] Will try again in 5 seconds ...
	I0327 11:19:57.242635   10482 start.go:360] acquireMachinesLock for custom-flannel-314000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:19:57.242833   10482 start.go:364] duration metric: took 155.167µs to acquireMachinesLock for "custom-flannel-314000"
	I0327 11:19:57.242876   10482 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-314000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:19:57.242942   10482 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:19:57.252092   10482 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 11:19:57.276150   10482 start.go:159] libmachine.API.Create for "custom-flannel-314000" (driver="qemu2")
	I0327 11:19:57.276192   10482 client.go:168] LocalClient.Create starting
	I0327 11:19:57.276267   10482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:19:57.276311   10482 main.go:141] libmachine: Decoding PEM data...
	I0327 11:19:57.276320   10482 main.go:141] libmachine: Parsing certificate...
	I0327 11:19:57.276362   10482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:19:57.276391   10482 main.go:141] libmachine: Decoding PEM data...
	I0327 11:19:57.276399   10482 main.go:141] libmachine: Parsing certificate...
	I0327 11:19:57.276808   10482 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:19:57.432557   10482 main.go:141] libmachine: Creating SSH key...
	I0327 11:19:57.464838   10482 main.go:141] libmachine: Creating Disk image...
	I0327 11:19:57.464853   10482 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:19:57.465043   10482 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/custom-flannel-314000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/custom-flannel-314000/disk.qcow2
	I0327 11:19:57.477571   10482 main.go:141] libmachine: STDOUT: 
	I0327 11:19:57.477591   10482 main.go:141] libmachine: STDERR: 
	I0327 11:19:57.477670   10482 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/custom-flannel-314000/disk.qcow2 +20000M
	I0327 11:19:57.488362   10482 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:19:57.488380   10482 main.go:141] libmachine: STDERR: 
	I0327 11:19:57.488399   10482 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/custom-flannel-314000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/custom-flannel-314000/disk.qcow2
	I0327 11:19:57.488406   10482 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:19:57.488439   10482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/custom-flannel-314000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/custom-flannel-314000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/custom-flannel-314000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:65:67:2d:04:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/custom-flannel-314000/disk.qcow2
	I0327 11:19:57.490166   10482 main.go:141] libmachine: STDOUT: 
	I0327 11:19:57.490185   10482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:19:57.490197   10482 client.go:171] duration metric: took 214.001916ms to LocalClient.Create
	I0327 11:19:59.491313   10482 start.go:128] duration metric: took 2.248368292s to createHost
	I0327 11:19:59.491359   10482 start.go:83] releasing machines lock for "custom-flannel-314000", held for 2.248532875s
	W0327 11:19:59.491525   10482 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-314000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:19:59.504966   10482 out.go:177] 
	W0327 11:19:59.508996   10482 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:19:59.509008   10482 out.go:239] * 
	W0327 11:19:59.510057   10482 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:19:59.526003   10482 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.86s)
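Note: the CNI selection under test (kubenet above, the testdata/kube-flannel.yaml manifest here, calico below) never comes into play. Exit status 80 corresponds to the GUEST_PROVISION error shown in the log, raised while creating the host VM, before any Kubernetes networking is configured. The socket failure can also be reproduced without QEMU at all, since socket_vmnet_client connects to the socket first and only then execs the command it wraps (a sketch assuming that documented wrapper behavior):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# with no daemon running, this prints the same line captured in STDERR above:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused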

TestNetworkPlugins/group/calico/Start (9.81s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-314000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-314000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.804201541s)

-- stdout --
	* [calico-314000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-314000" primary control-plane node in "calico-314000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-314000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:20:02.037398   10602 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:20:02.037547   10602 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:20:02.037551   10602 out.go:304] Setting ErrFile to fd 2...
	I0327 11:20:02.037553   10602 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:20:02.037672   10602 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:20:02.038742   10602 out.go:298] Setting JSON to false
	I0327 11:20:02.055118   10602 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6575,"bootTime":1711557027,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:20:02.055183   10602 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:20:02.061711   10602 out.go:177] * [calico-314000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:20:02.069694   10602 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:20:02.069775   10602 notify.go:220] Checking for updates...
	I0327 11:20:02.074728   10602 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:20:02.078622   10602 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:20:02.081712   10602 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:20:02.084747   10602 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:20:02.087752   10602 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:20:02.091124   10602 config.go:182] Loaded profile config "multinode-906000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:20:02.091184   10602 config.go:182] Loaded profile config "stopped-upgrade-042000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 11:20:02.091232   10602 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:20:02.095682   10602 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 11:20:02.102683   10602 start.go:297] selected driver: qemu2
	I0327 11:20:02.102693   10602 start.go:901] validating driver "qemu2" against <nil>
	I0327 11:20:02.102700   10602 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:20:02.105022   10602 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 11:20:02.107711   10602 out.go:177] * Automatically selected the socket_vmnet network
	I0327 11:20:02.110741   10602 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:20:02.110779   10602 cni.go:84] Creating CNI manager for "calico"
	I0327 11:20:02.110790   10602 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0327 11:20:02.110830   10602 start.go:340] cluster config:
	{Name:calico-314000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:20:02.114868   10602 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:02.122745   10602 out.go:177] * Starting "calico-314000" primary control-plane node in "calico-314000" cluster
	I0327 11:20:02.126674   10602 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 11:20:02.126689   10602 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 11:20:02.126693   10602 cache.go:56] Caching tarball of preloaded images
	I0327 11:20:02.126750   10602 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:20:02.126755   10602 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 11:20:02.126806   10602 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/calico-314000/config.json ...
	I0327 11:20:02.126815   10602 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/calico-314000/config.json: {Name:mk7d859980f4a0c4241d1adf0f7c62bc1dc57d77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:20:02.127066   10602 start.go:360] acquireMachinesLock for calico-314000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:20:02.127093   10602 start.go:364] duration metric: took 22.334µs to acquireMachinesLock for "calico-314000"
	I0327 11:20:02.127105   10602 start.go:93] Provisioning new machine with config: &{Name:calico-314000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:20:02.127150   10602 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:20:02.134640   10602 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 11:20:02.149246   10602 start.go:159] libmachine.API.Create for "calico-314000" (driver="qemu2")
	I0327 11:20:02.149275   10602 client.go:168] LocalClient.Create starting
	I0327 11:20:02.149336   10602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:20:02.149368   10602 main.go:141] libmachine: Decoding PEM data...
	I0327 11:20:02.149376   10602 main.go:141] libmachine: Parsing certificate...
	I0327 11:20:02.149420   10602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:20:02.149441   10602 main.go:141] libmachine: Decoding PEM data...
	I0327 11:20:02.149448   10602 main.go:141] libmachine: Parsing certificate...
	I0327 11:20:02.149809   10602 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:20:02.303175   10602 main.go:141] libmachine: Creating SSH key...
	I0327 11:20:02.444598   10602 main.go:141] libmachine: Creating Disk image...
	I0327 11:20:02.444608   10602 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:20:02.444766   10602 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/calico-314000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/calico-314000/disk.qcow2
	I0327 11:20:02.457742   10602 main.go:141] libmachine: STDOUT: 
	I0327 11:20:02.457761   10602 main.go:141] libmachine: STDERR: 
	I0327 11:20:02.457817   10602 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/calico-314000/disk.qcow2 +20000M
	I0327 11:20:02.468693   10602 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:20:02.468708   10602 main.go:141] libmachine: STDERR: 
	I0327 11:20:02.468726   10602 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/calico-314000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/calico-314000/disk.qcow2
	I0327 11:20:02.468730   10602 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:20:02.468781   10602 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/calico-314000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/calico-314000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/calico-314000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:e8:09:1c:d4:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/calico-314000/disk.qcow2
	I0327 11:20:02.470534   10602 main.go:141] libmachine: STDOUT: 
	I0327 11:20:02.470549   10602 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:20:02.470568   10602 client.go:171] duration metric: took 321.2905ms to LocalClient.Create
	I0327 11:20:04.472206   10602 start.go:128] duration metric: took 2.345055042s to createHost
	I0327 11:20:04.472256   10602 start.go:83] releasing machines lock for "calico-314000", held for 2.345173541s
	W0327 11:20:04.472300   10602 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:20:04.483974   10602 out.go:177] * Deleting "calico-314000" in qemu2 ...
	W0327 11:20:04.502792   10602 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:20:04.502808   10602 start.go:728] Will try again in 5 seconds ...
	I0327 11:20:09.504896   10602 start.go:360] acquireMachinesLock for calico-314000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:20:09.505016   10602 start.go:364] duration metric: took 86.666µs to acquireMachinesLock for "calico-314000"
	I0327 11:20:09.505028   10602 start.go:93] Provisioning new machine with config: &{Name:calico-314000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:20:09.505073   10602 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:20:09.513290   10602 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 11:20:09.529298   10602 start.go:159] libmachine.API.Create for "calico-314000" (driver="qemu2")
	I0327 11:20:09.529325   10602 client.go:168] LocalClient.Create starting
	I0327 11:20:09.529387   10602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:20:09.529428   10602 main.go:141] libmachine: Decoding PEM data...
	I0327 11:20:09.529436   10602 main.go:141] libmachine: Parsing certificate...
	I0327 11:20:09.529470   10602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:20:09.529491   10602 main.go:141] libmachine: Decoding PEM data...
	I0327 11:20:09.529496   10602 main.go:141] libmachine: Parsing certificate...
	I0327 11:20:09.529774   10602 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:20:09.687444   10602 main.go:141] libmachine: Creating SSH key...
	I0327 11:20:09.737486   10602 main.go:141] libmachine: Creating Disk image...
	I0327 11:20:09.737495   10602 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:20:09.737679   10602 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/calico-314000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/calico-314000/disk.qcow2
	I0327 11:20:09.749628   10602 main.go:141] libmachine: STDOUT: 
	I0327 11:20:09.749650   10602 main.go:141] libmachine: STDERR: 
	I0327 11:20:09.749701   10602 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/calico-314000/disk.qcow2 +20000M
	I0327 11:20:09.760316   10602 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:20:09.760333   10602 main.go:141] libmachine: STDERR: 
	I0327 11:20:09.760347   10602 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/calico-314000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/calico-314000/disk.qcow2
	I0327 11:20:09.760351   10602 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:20:09.760384   10602 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/calico-314000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/calico-314000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/calico-314000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:99:18:ae:8f:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/calico-314000/disk.qcow2
	I0327 11:20:09.762154   10602 main.go:141] libmachine: STDOUT: 
	I0327 11:20:09.762170   10602 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:20:09.762182   10602 client.go:171] duration metric: took 232.854708ms to LocalClient.Create
	I0327 11:20:11.764378   10602 start.go:128] duration metric: took 2.259293708s to createHost
	I0327 11:20:11.764481   10602 start.go:83] releasing machines lock for "calico-314000", held for 2.259470458s
	W0327 11:20:11.764804   10602 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-314000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-314000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:20:11.781549   10602 out.go:177] 
	W0327 11:20:11.785621   10602 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:20:11.785650   10602 out.go:239] * 
	* 
	W0327 11:20:11.787979   10602 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:20:11.797490   10602 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.81s)
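
Every qemu2 failure in this run collapses to the same STDERR line at the VM start step: Failed to connect to "/var/run/socket_vmnet": Connection refused. On a unix socket, "connection refused" means the socket file exists but no daemon is accepting on it (a missing file would report "no such file or directory" instead), so the socket_vmnet daemon on this agent is down and qemu is never launched. A minimal probe of just that condition, independent of minikube (a Go sketch; the socket path is taken from SocketVMnetPath in the configs logged above):

	package main

	import (
		"fmt"
		"net"
		"os"
	)

	// Dial the unix socket that socket_vmnet_client connects to.
	// ECONNREFUSED here reproduces the failure in the logs above;
	// ENOENT would instead mean the socket was never created.
	func main() {
		const sock = "/var/run/socket_vmnet"
		conn, err := net.Dial("unix", sock)
		if err != nil {
			fmt.Fprintf(os.Stderr, "Failed to connect to %q: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("%s is accepting connections\n", sock)
	}

If the probe fails the same way, the fix is on the CI host (restarting the socket_vmnet daemon, which typically runs as root because Apple's vmnet framework requires it) rather than in minikube or the tests.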

                                                
                                    
TestNetworkPlugins/group/false/Start (9.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-314000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-314000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.837873458s)

                                                
                                                
-- stdout --
	* [false-314000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-314000" primary control-plane node in "false-314000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-314000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 11:20:14.331275   10729 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:20:14.331397   10729 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:20:14.331401   10729 out.go:304] Setting ErrFile to fd 2...
	I0327 11:20:14.331403   10729 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:20:14.331519   10729 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:20:14.332640   10729 out.go:298] Setting JSON to false
	I0327 11:20:14.348781   10729 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6587,"bootTime":1711557027,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:20:14.348843   10729 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:20:14.354067   10729 out.go:177] * [false-314000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:20:14.361893   10729 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:20:14.361936   10729 notify.go:220] Checking for updates...
	I0327 11:20:14.369024   10729 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:20:14.372000   10729 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:20:14.374983   10729 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:20:14.378009   10729 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:20:14.379476   10729 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:20:14.383297   10729 config.go:182] Loaded profile config "multinode-906000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:20:14.383371   10729 config.go:182] Loaded profile config "stopped-upgrade-042000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 11:20:14.383420   10729 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:20:14.387957   10729 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 11:20:14.393006   10729 start.go:297] selected driver: qemu2
	I0327 11:20:14.393018   10729 start.go:901] validating driver "qemu2" against <nil>
	I0327 11:20:14.393023   10729 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:20:14.395317   10729 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 11:20:14.399955   10729 out.go:177] * Automatically selected the socket_vmnet network
	I0327 11:20:14.401428   10729 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:20:14.401483   10729 cni.go:84] Creating CNI manager for "false"
	I0327 11:20:14.401509   10729 start.go:340] cluster config:
	{Name:false-314000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:20:14.406067   10729 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:14.415953   10729 out.go:177] * Starting "false-314000" primary control-plane node in "false-314000" cluster
	I0327 11:20:14.419950   10729 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 11:20:14.419975   10729 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 11:20:14.419983   10729 cache.go:56] Caching tarball of preloaded images
	I0327 11:20:14.420059   10729 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:20:14.420066   10729 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 11:20:14.420129   10729 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/false-314000/config.json ...
	I0327 11:20:14.420141   10729 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/false-314000/config.json: {Name:mkb71d5f7d89ef4d552125ba4758aebae1c51df8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:20:14.420400   10729 start.go:360] acquireMachinesLock for false-314000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:20:14.420438   10729 start.go:364] duration metric: took 30.125µs to acquireMachinesLock for "false-314000"
	I0327 11:20:14.420453   10729 start.go:93] Provisioning new machine with config: &{Name:false-314000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:20:14.420485   10729 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:20:14.428984   10729 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 11:20:14.444603   10729 start.go:159] libmachine.API.Create for "false-314000" (driver="qemu2")
	I0327 11:20:14.444635   10729 client.go:168] LocalClient.Create starting
	I0327 11:20:14.444707   10729 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:20:14.444738   10729 main.go:141] libmachine: Decoding PEM data...
	I0327 11:20:14.444749   10729 main.go:141] libmachine: Parsing certificate...
	I0327 11:20:14.444791   10729 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:20:14.444813   10729 main.go:141] libmachine: Decoding PEM data...
	I0327 11:20:14.444821   10729 main.go:141] libmachine: Parsing certificate...
	I0327 11:20:14.445176   10729 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:20:14.600123   10729 main.go:141] libmachine: Creating SSH key...
	I0327 11:20:14.660060   10729 main.go:141] libmachine: Creating Disk image...
	I0327 11:20:14.660069   10729 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:20:14.660260   10729 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/false-314000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/false-314000/disk.qcow2
	I0327 11:20:14.672692   10729 main.go:141] libmachine: STDOUT: 
	I0327 11:20:14.672714   10729 main.go:141] libmachine: STDERR: 
	I0327 11:20:14.672764   10729 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/false-314000/disk.qcow2 +20000M
	I0327 11:20:14.683510   10729 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:20:14.683527   10729 main.go:141] libmachine: STDERR: 
	I0327 11:20:14.683542   10729 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/false-314000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/false-314000/disk.qcow2
	I0327 11:20:14.683548   10729 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:20:14.683591   10729 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/false-314000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/false-314000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/false-314000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:e4:05:f2:1a:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/false-314000/disk.qcow2
	I0327 11:20:14.685362   10729 main.go:141] libmachine: STDOUT: 
	I0327 11:20:14.685382   10729 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:20:14.685404   10729 client.go:171] duration metric: took 240.765417ms to LocalClient.Create
	I0327 11:20:16.687662   10729 start.go:128] duration metric: took 2.26716325s to createHost
	I0327 11:20:16.687741   10729 start.go:83] releasing machines lock for "false-314000", held for 2.267310083s
	W0327 11:20:16.687803   10729 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:20:16.700383   10729 out.go:177] * Deleting "false-314000" in qemu2 ...
	W0327 11:20:16.723910   10729 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:20:16.723932   10729 start.go:728] Will try again in 5 seconds ...
	I0327 11:20:21.724618   10729 start.go:360] acquireMachinesLock for false-314000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:20:21.725093   10729 start.go:364] duration metric: took 316.083µs to acquireMachinesLock for "false-314000"
	I0327 11:20:21.725217   10729 start.go:93] Provisioning new machine with config: &{Name:false-314000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:20:21.725421   10729 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:20:21.734062   10729 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 11:20:21.778460   10729 start.go:159] libmachine.API.Create for "false-314000" (driver="qemu2")
	I0327 11:20:21.778517   10729 client.go:168] LocalClient.Create starting
	I0327 11:20:21.778626   10729 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:20:21.778681   10729 main.go:141] libmachine: Decoding PEM data...
	I0327 11:20:21.778696   10729 main.go:141] libmachine: Parsing certificate...
	I0327 11:20:21.778768   10729 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:20:21.778817   10729 main.go:141] libmachine: Decoding PEM data...
	I0327 11:20:21.778830   10729 main.go:141] libmachine: Parsing certificate...
	I0327 11:20:21.779378   10729 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:20:21.949098   10729 main.go:141] libmachine: Creating SSH key...
	I0327 11:20:22.062391   10729 main.go:141] libmachine: Creating Disk image...
	I0327 11:20:22.062397   10729 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:20:22.062573   10729 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/false-314000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/false-314000/disk.qcow2
	I0327 11:20:22.075075   10729 main.go:141] libmachine: STDOUT: 
	I0327 11:20:22.075096   10729 main.go:141] libmachine: STDERR: 
	I0327 11:20:22.075146   10729 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/false-314000/disk.qcow2 +20000M
	I0327 11:20:22.086265   10729 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:20:22.086295   10729 main.go:141] libmachine: STDERR: 
	I0327 11:20:22.086307   10729 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/false-314000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/false-314000/disk.qcow2
	I0327 11:20:22.086311   10729 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:20:22.086764   10729 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/false-314000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/false-314000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/false-314000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:39:3e:9c:ff:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/false-314000/disk.qcow2
	I0327 11:20:22.089427   10729 main.go:141] libmachine: STDOUT: 
	I0327 11:20:22.089445   10729 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:20:22.089456   10729 client.go:171] duration metric: took 310.933875ms to LocalClient.Create
	I0327 11:20:24.091652   10729 start.go:128] duration metric: took 2.366190916s to createHost
	I0327 11:20:24.091735   10729 start.go:83] releasing machines lock for "false-314000", held for 2.366637667s
	W0327 11:20:24.092112   10729 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-314000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-314000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:20:24.106866   10729 out.go:177] 
	W0327 11:20:24.110925   10729 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:20:24.110989   10729 out.go:239] * 
	* 
	W0327 11:20:24.113510   10729 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:20:24.124814   10729 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.84s)
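
For context on the command line being retried in each attempt: minikube does not execute qemu-system-aarch64 directly. It runs socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connected socket to qemu as an inherited descriptor, which is what -netdev socket,id=net0,fd=3 refers to. A condensed Go sketch of that hand-off (assumed, simplified behavior; the real client is a small C program from the lima-vm/socket_vmnet project):

	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	// Connect to the socket_vmnet daemon and start qemu with the
	// connected socket inherited as fd 3: ExtraFiles[0] becomes
	// descriptor 3 in the child, matching "-netdev socket,id=net0,fd=3".
	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatalf("Failed to connect to %q: %v", "/var/run/socket_vmnet", err)
		}
		sock, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}
		cmd := exec.Command("qemu-system-aarch64",
			"-device", "virtio-net-pci,netdev=net0",
			"-netdev", "socket,id=net0,fd=3",
			// ...remaining qemu flags as logged above...
		)
		cmd.ExtraFiles = []*os.File{sock}
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}

The "Failed to connect" STDERR in these logs corresponds to the Dial step failing, which is why qemu itself never produces any output and each test ends in GUEST_PROVISION after a single 5-second retry.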

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-316000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-316000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.215443417s)

                                                
                                                
-- stdout --
	* [old-k8s-version-316000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-316000" primary control-plane node in "old-k8s-version-316000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-316000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 11:20:26.547619   10845 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:20:26.547765   10845 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:20:26.547769   10845 out.go:304] Setting ErrFile to fd 2...
	I0327 11:20:26.547771   10845 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:20:26.547898   10845 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:20:26.549078   10845 out.go:298] Setting JSON to false
	I0327 11:20:26.565901   10845 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6599,"bootTime":1711557027,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:20:26.565975   10845 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:20:26.573865   10845 out.go:177] * [old-k8s-version-316000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:20:26.581102   10845 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:20:26.585097   10845 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:20:26.581174   10845 notify.go:220] Checking for updates...
	I0327 11:20:26.589078   10845 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:20:26.594009   10845 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:20:26.602103   10845 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:20:26.605025   10845 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:20:26.608490   10845 config.go:182] Loaded profile config "multinode-906000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:20:26.608559   10845 config.go:182] Loaded profile config "stopped-upgrade-042000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 11:20:26.608601   10845 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:20:26.613033   10845 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 11:20:26.620024   10845 start.go:297] selected driver: qemu2
	I0327 11:20:26.620030   10845 start.go:901] validating driver "qemu2" against <nil>
	I0327 11:20:26.620036   10845 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:20:26.622471   10845 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 11:20:26.627076   10845 out.go:177] * Automatically selected the socket_vmnet network
	I0327 11:20:26.630235   10845 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:20:26.630270   10845 cni.go:84] Creating CNI manager for ""
	I0327 11:20:26.630277   10845 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0327 11:20:26.630308   10845 start.go:340] cluster config:
	{Name:old-k8s-version-316000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-316000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:20:26.634965   10845 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:26.642856   10845 out.go:177] * Starting "old-k8s-version-316000" primary control-plane node in "old-k8s-version-316000" cluster
	I0327 11:20:26.646958   10845 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 11:20:26.646974   10845 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0327 11:20:26.646981   10845 cache.go:56] Caching tarball of preloaded images
	I0327 11:20:26.647030   10845 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:20:26.647035   10845 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0327 11:20:26.647093   10845 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/old-k8s-version-316000/config.json ...
	I0327 11:20:26.647103   10845 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/old-k8s-version-316000/config.json: {Name:mk3b95ecb2de08c308cc667103ec7132e2828f02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:20:26.647335   10845 start.go:360] acquireMachinesLock for old-k8s-version-316000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:20:26.647371   10845 start.go:364] duration metric: took 29.5µs to acquireMachinesLock for "old-k8s-version-316000"
	I0327 11:20:26.647387   10845 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-316000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-316000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:20:26.647417   10845 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:20:26.655111   10845 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 11:20:26.670023   10845 start.go:159] libmachine.API.Create for "old-k8s-version-316000" (driver="qemu2")
	I0327 11:20:26.670047   10845 client.go:168] LocalClient.Create starting
	I0327 11:20:26.670115   10845 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:20:26.670144   10845 main.go:141] libmachine: Decoding PEM data...
	I0327 11:20:26.670155   10845 main.go:141] libmachine: Parsing certificate...
	I0327 11:20:26.670200   10845 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:20:26.670220   10845 main.go:141] libmachine: Decoding PEM data...
	I0327 11:20:26.670228   10845 main.go:141] libmachine: Parsing certificate...
	I0327 11:20:26.670585   10845 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:20:26.821984   10845 main.go:141] libmachine: Creating SSH key...
	I0327 11:20:27.020906   10845 main.go:141] libmachine: Creating Disk image...
	I0327 11:20:27.020915   10845 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:20:27.021362   10845 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/disk.qcow2
	I0327 11:20:27.034243   10845 main.go:141] libmachine: STDOUT: 
	I0327 11:20:27.034271   10845 main.go:141] libmachine: STDERR: 
	I0327 11:20:27.034329   10845 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/disk.qcow2 +20000M
	I0327 11:20:27.045353   10845 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:20:27.045369   10845 main.go:141] libmachine: STDERR: 
	I0327 11:20:27.045382   10845 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/disk.qcow2
	I0327 11:20:27.045387   10845 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:20:27.045424   10845 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:69:fe:f0:e6:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/disk.qcow2
	I0327 11:20:27.047323   10845 main.go:141] libmachine: STDOUT: 
	I0327 11:20:27.047338   10845 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:20:27.047356   10845 client.go:171] duration metric: took 377.306209ms to LocalClient.Create
	I0327 11:20:29.049561   10845 start.go:128] duration metric: took 2.402128292s to createHost
	I0327 11:20:29.049640   10845 start.go:83] releasing machines lock for "old-k8s-version-316000", held for 2.402273709s
	W0327 11:20:29.049774   10845 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:20:29.060899   10845 out.go:177] * Deleting "old-k8s-version-316000" in qemu2 ...
	W0327 11:20:29.093810   10845 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:20:29.093836   10845 start.go:728] Will try again in 5 seconds ...
	I0327 11:20:34.096062   10845 start.go:360] acquireMachinesLock for old-k8s-version-316000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:20:34.096593   10845 start.go:364] duration metric: took 395µs to acquireMachinesLock for "old-k8s-version-316000"
	I0327 11:20:34.096731   10845 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-316000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-316000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:20:34.096989   10845 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:20:34.106840   10845 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 11:20:34.155663   10845 start.go:159] libmachine.API.Create for "old-k8s-version-316000" (driver="qemu2")
	I0327 11:20:34.155713   10845 client.go:168] LocalClient.Create starting
	I0327 11:20:34.155823   10845 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:20:34.155884   10845 main.go:141] libmachine: Decoding PEM data...
	I0327 11:20:34.155905   10845 main.go:141] libmachine: Parsing certificate...
	I0327 11:20:34.155960   10845 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:20:34.156002   10845 main.go:141] libmachine: Decoding PEM data...
	I0327 11:20:34.156015   10845 main.go:141] libmachine: Parsing certificate...
	I0327 11:20:34.156544   10845 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:20:34.320451   10845 main.go:141] libmachine: Creating SSH key...
	I0327 11:20:34.666623   10845 main.go:141] libmachine: Creating Disk image...
	I0327 11:20:34.666631   10845 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:20:34.666819   10845 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/disk.qcow2
	I0327 11:20:34.679669   10845 main.go:141] libmachine: STDOUT: 
	I0327 11:20:34.679697   10845 main.go:141] libmachine: STDERR: 
	I0327 11:20:34.679762   10845 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/disk.qcow2 +20000M
	I0327 11:20:34.690695   10845 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:20:34.690727   10845 main.go:141] libmachine: STDERR: 
	I0327 11:20:34.690745   10845 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/disk.qcow2
	I0327 11:20:34.690750   10845 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:20:34.690794   10845 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:f4:b3:04:04:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/disk.qcow2
	I0327 11:20:34.692710   10845 main.go:141] libmachine: STDOUT: 
	I0327 11:20:34.692744   10845 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:20:34.692762   10845 client.go:171] duration metric: took 537.047583ms to LocalClient.Create
	I0327 11:20:36.695057   10845 start.go:128] duration metric: took 2.598029583s to createHost
	I0327 11:20:36.695136   10845 start.go:83] releasing machines lock for "old-k8s-version-316000", held for 2.598530208s
	W0327 11:20:36.695533   10845 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-316000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:20:36.707199   10845 out.go:177] 
	W0327 11:20:36.711217   10845 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:20:36.711247   10845 out.go:239] * 
	W0327 11:20:36.713788   10845 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:20:36.719216   10845 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-316000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
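
Every failure mode in the log above reduces to the same root cause: the qemu2 driver launches the VM through socket_vmnet_client, and the connection to the socket_vmnet daemon at /var/run/socket_vmnet is refused. A minimal Go sketch of that probe (the socket path is copied from the log; this is a diagnostic aid, not part of the test suite):

	// vmnetprobe.go - dial the socket_vmnet unix socket to see whether the
	// daemon is accepting connections at all.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// The socket file can exist while no daemon is accepting on it,
			// which is exactly what "Connection refused" indicates.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails, getting the daemon back up (however it is managed on the host) is the fix; rerunning minikube alone will keep hitting the same refusal, as the retries below show.
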
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-316000 -n old-k8s-version-316000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-316000 -n old-k8s-version-316000: exit status 7 (68.950667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-316000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.29s)
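
The post-mortem above leans on minikube status exit codes: exit status 7 with a host state of "Stopped" is treated as expected after a failed start ("may be ok"). A hedged Go sketch of that interpretation (binary path and profile name copied from the log):

	// statusprobe.go - run the same status command as the post-mortem and
	// map exit status 7 to "host stopped", as helpers_test.go:239 does.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", "old-k8s-version-316000")
		out, err := cmd.Output() // out still holds stdout ("Stopped") on non-zero exit
		fmt.Printf("host state: %s", out)
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
			fmt.Println("status error: exit status 7 (may be ok)")
		} else if err != nil {
			fmt.Printf("unexpected status failure: %v\n", err)
		}
	}
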

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-316000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-316000 create -f testdata/busybox.yaml: exit status 1 (30.0885ms)

** stderr ** 
	error: context "old-k8s-version-316000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-316000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-316000 -n old-k8s-version-316000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-316000 -n old-k8s-version-316000: exit status 7 (31.271083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-316000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-316000 -n old-k8s-version-316000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-316000 -n old-k8s-version-316000: exit status 7 (31.005917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-316000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
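
This test fails for a secondary reason: the first start never created the old-k8s-version-316000 context, so every `kubectl --context` call is dead on arrival. A small Go sketch of a context pre-check (assumes kubectl is on PATH; `kubectl config get-contexts -o name` prints one context name per line):

	// ctxcheck.go - verify a kubeconfig context exists before using
	// "kubectl --context <name>", which is the step that fails above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func contextExists(name string) (bool, error) {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if ctx == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := contextExists("old-k8s-version-316000")
		fmt.Println(ok, err) // false <nil> in the state captured above
	}
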

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-316000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-316000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-316000 describe deploy/metrics-server -n kube-system: exit status 1 (27.388875ms)

** stderr ** 
	error: context "old-k8s-version-316000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-316000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-316000 -n old-k8s-version-316000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-316000 -n old-k8s-version-316000: exit status 7 (31.583375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-316000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
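
The assertion at start_stop_delete_test.go:221 is string-based: the --registries override (fake.domain) is prepended to the --images override (registry.k8s.io/echoserver:1.4), and the resulting reference must appear in the `kubectl describe` output. Since describe returned nothing here, the check runs against an empty string and fails. A sketch of that composition and check:

	// addonimage.go - compose the expected image reference and apply the same
	// substring check the test applies to the (empty) describe output.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		registry := "fake.domain"                 // --registries=MetricsServer=...
		image := "registry.k8s.io/echoserver:1.4" // --images=MetricsServer=...
		want := registry + "/" + image            // fake.domain/registry.k8s.io/echoserver:1.4

		deployInfo := "" // "kubectl describe deploy/metrics-server" produced no output above
		fmt.Printf("expected %q, contained: %v\n", want, strings.Contains(deployInfo, want))
	}
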

TestStartStop/group/old-k8s-version/serial/SecondStart (5.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-316000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-316000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.207579125s)

-- stdout --
	* [old-k8s-version-316000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-316000" primary control-plane node in "old-k8s-version-316000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-316000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-316000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:20:40.196660   10899 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:20:40.196794   10899 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:20:40.196798   10899 out.go:304] Setting ErrFile to fd 2...
	I0327 11:20:40.196800   10899 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:20:40.196914   10899 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:20:40.197927   10899 out.go:298] Setting JSON to false
	I0327 11:20:40.214363   10899 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6613,"bootTime":1711557027,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:20:40.214428   10899 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:20:40.219019   10899 out.go:177] * [old-k8s-version-316000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:20:40.231924   10899 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:20:40.227041   10899 notify.go:220] Checking for updates...
	I0327 11:20:40.238007   10899 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:20:40.241981   10899 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:20:40.248009   10899 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:20:40.251988   10899 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:20:40.255031   10899 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:20:40.258347   10899 config.go:182] Loaded profile config "old-k8s-version-316000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0327 11:20:40.261966   10899 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0327 11:20:40.264957   10899 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:20:40.268957   10899 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 11:20:40.276018   10899 start.go:297] selected driver: qemu2
	I0327 11:20:40.276022   10899 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-316000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-316000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:20:40.276088   10899 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:20:40.278289   10899 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:20:40.278330   10899 cni.go:84] Creating CNI manager for ""
	I0327 11:20:40.278336   10899 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0327 11:20:40.278361   10899 start.go:340] cluster config:
	{Name:old-k8s-version-316000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-316000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:20:40.282350   10899 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:40.287939   10899 out.go:177] * Starting "old-k8s-version-316000" primary control-plane node in "old-k8s-version-316000" cluster
	I0327 11:20:40.292066   10899 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 11:20:40.292083   10899 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0327 11:20:40.292096   10899 cache.go:56] Caching tarball of preloaded images
	I0327 11:20:40.292150   10899 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:20:40.292155   10899 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0327 11:20:40.292220   10899 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/old-k8s-version-316000/config.json ...
	I0327 11:20:40.292679   10899 start.go:360] acquireMachinesLock for old-k8s-version-316000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:20:40.292707   10899 start.go:364] duration metric: took 22.416µs to acquireMachinesLock for "old-k8s-version-316000"
	I0327 11:20:40.292715   10899 start.go:96] Skipping create...Using existing machine configuration
	I0327 11:20:40.292719   10899 fix.go:54] fixHost starting: 
	I0327 11:20:40.292824   10899 fix.go:112] recreateIfNeeded on old-k8s-version-316000: state=Stopped err=<nil>
	W0327 11:20:40.292837   10899 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 11:20:40.297010   10899 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-316000" ...
	I0327 11:20:40.304044   10899 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:f4:b3:04:04:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/disk.qcow2
	I0327 11:20:40.305993   10899 main.go:141] libmachine: STDOUT: 
	I0327 11:20:40.306013   10899 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:20:40.306039   10899 fix.go:56] duration metric: took 13.318959ms for fixHost
	I0327 11:20:40.306044   10899 start.go:83] releasing machines lock for "old-k8s-version-316000", held for 13.332916ms
	W0327 11:20:40.306051   10899 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:20:40.306077   10899 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:20:40.306082   10899 start.go:728] Will try again in 5 seconds ...
	I0327 11:20:45.306297   10899 start.go:360] acquireMachinesLock for old-k8s-version-316000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:20:45.306776   10899 start.go:364] duration metric: took 298.541µs to acquireMachinesLock for "old-k8s-version-316000"
	I0327 11:20:45.306893   10899 start.go:96] Skipping create...Using existing machine configuration
	I0327 11:20:45.306914   10899 fix.go:54] fixHost starting: 
	I0327 11:20:45.307632   10899 fix.go:112] recreateIfNeeded on old-k8s-version-316000: state=Stopped err=<nil>
	W0327 11:20:45.307657   10899 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 11:20:45.318192   10899 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-316000" ...
	I0327 11:20:45.323713   10899 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:f4:b3:04:04:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/old-k8s-version-316000/disk.qcow2
	I0327 11:20:45.334184   10899 main.go:141] libmachine: STDOUT: 
	I0327 11:20:45.334253   10899 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:20:45.334328   10899 fix.go:56] duration metric: took 27.412208ms for fixHost
	I0327 11:20:45.334346   10899 start.go:83] releasing machines lock for "old-k8s-version-316000", held for 27.543333ms
	W0327 11:20:45.334555   10899 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-316000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:20:45.343163   10899 out.go:177] 
	W0327 11:20:45.347497   10899 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:20:45.347526   10899 out.go:239] * 
	W0327 11:20:45.349966   10899 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:20:45.361216   10899 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-316000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
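
The start path in both logs has the same shape: one StartHost attempt, a logged warning, a fixed five-second wait, a single retry, then a hard GUEST_PROVISION exit. A simplified Go sketch of that control flow (the real logic is in minikube's start.go, visible at lines 713 and 728 of the trace):

	// retrysketch.go - the try / wait 5s / retry once / exit pattern from the logs.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func startHost() error {
		// Stand-in for the driver start, which fails while socket_vmnet is down.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		err := startHost()
		if err == nil {
			return
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}
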
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-316000 -n old-k8s-version-316000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-316000 -n old-k8s-version-316000: exit status 7 (68.111667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-316000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.28s)
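
For reference, the disk-image creation that succeeds in every attempt above is a two-step qemu-img sequence: convert the raw scaffold to qcow2, then grow it by the requested size. A Go sketch of those two invocations (the path is an illustrative stand-in for the .minikube/machines files in the logs, and the .raw scaffold must already exist):

	// mkdisk.go - the qemu-img convert + resize steps from the create logs.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) {
		out, err := exec.Command("qemu-img", args...).CombinedOutput()
		fmt.Printf("qemu-img %v\n%s", args, out)
		if err != nil {
			panic(err)
		}
	}

	func main() {
		disk := "/tmp/disk.qcow2" // stand-in path
		run("convert", "-f", "raw", "-O", "qcow2", disk+".raw", disk)
		run("resize", disk, "+20000M") // grow by the requested 20000 MB
	}
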

TestStartStop/group/no-preload/serial/FirstStart (9.96s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-070000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-070000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0: exit status 80 (9.892252167s)

-- stdout --
	* [no-preload-070000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-070000" primary control-plane node in "no-preload-070000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-070000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:20:41.519186   10910 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:20:41.519331   10910 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:20:41.519334   10910 out.go:304] Setting ErrFile to fd 2...
	I0327 11:20:41.519336   10910 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:20:41.519468   10910 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:20:41.520526   10910 out.go:298] Setting JSON to false
	I0327 11:20:41.536747   10910 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6614,"bootTime":1711557027,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:20:41.536801   10910 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:20:41.541622   10910 out.go:177] * [no-preload-070000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:20:41.549781   10910 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:20:41.552737   10910 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:20:41.549803   10910 notify.go:220] Checking for updates...
	I0327 11:20:41.558612   10910 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:20:41.561736   10910 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:20:41.564779   10910 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:20:41.566176   10910 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:20:41.570111   10910 config.go:182] Loaded profile config "multinode-906000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:20:41.570188   10910 config.go:182] Loaded profile config "old-k8s-version-316000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0327 11:20:41.570234   10910 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:20:41.574699   10910 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 11:20:41.579686   10910 start.go:297] selected driver: qemu2
	I0327 11:20:41.579691   10910 start.go:901] validating driver "qemu2" against <nil>
	I0327 11:20:41.579696   10910 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:20:41.581979   10910 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 11:20:41.585741   10910 out.go:177] * Automatically selected the socket_vmnet network
	I0327 11:20:41.587306   10910 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:20:41.587356   10910 cni.go:84] Creating CNI manager for ""
	I0327 11:20:41.587364   10910 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:20:41.587368   10910 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 11:20:41.587392   10910 start.go:340] cluster config:
	{Name:no-preload-070000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:20:41.591821   10910 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:41.599776   10910 out.go:177] * Starting "no-preload-070000" primary control-plane node in "no-preload-070000" cluster
	I0327 11:20:41.603612   10910 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 11:20:41.603681   10910 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/no-preload-070000/config.json ...
	I0327 11:20:41.603696   10910 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/no-preload-070000/config.json: {Name:mk56f0ef81dcf3bdd629c4b8440a599627ae316f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:20:41.603700   10910 cache.go:107] acquiring lock: {Name:mk0381760554e7607b6127c65d3e951c454618f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:41.603708   10910 cache.go:107] acquiring lock: {Name:mk6f02f0d6dbb12708827a9ac435473bdeff022e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:41.603711   10910 cache.go:107] acquiring lock: {Name:mka0f2f8e11e0eb72f711f85d8719c830c498994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:41.603759   10910 cache.go:115] /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0327 11:20:41.603764   10910 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 67.833µs
	I0327 11:20:41.603770   10910 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0327 11:20:41.603781   10910 cache.go:107] acquiring lock: {Name:mk9e1b64b99fe11f904ddce72b51aaad8f118289 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:41.603868   10910 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0327 11:20:41.603914   10910 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0327 11:20:41.603943   10910 cache.go:107] acquiring lock: {Name:mkbc30056f616060fa94961831c512d4320327bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:41.603971   10910 start.go:360] acquireMachinesLock for no-preload-070000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:20:41.604002   10910 start.go:364] duration metric: took 25.417µs to acquireMachinesLock for "no-preload-070000"
	I0327 11:20:41.603980   10910 cache.go:107] acquiring lock: {Name:mkb46484565a7d2fb472bc8874d6f039ce8ef513 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:41.604003   10910 cache.go:107] acquiring lock: {Name:mk70b44e6f68330b5c5affe2098edb314af1f45f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:41.604015   10910 start.go:93] Provisioning new machine with config: &{Name:no-preload-070000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:20:41.604047   10910 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:20:41.604057   10910 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0327 11:20:41.604062   10910 cache.go:107] acquiring lock: {Name:mk6705a8c85a82585cfd3e6b063296f34d595cd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:41.607734   10910 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 11:20:41.604060   10910 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0327 11:20:41.604173   10910 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0327 11:20:41.604191   10910 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0327 11:20:41.604191   10910 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0327 11:20:41.614154   10910 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0327 11:20:41.614170   10910 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0327 11:20:41.614198   10910 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0327 11:20:41.614333   10910 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0327 11:20:41.614571   10910 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0327 11:20:41.617181   10910 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0327 11:20:41.617469   10910 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0327 11:20:41.624701   10910 start.go:159] libmachine.API.Create for "no-preload-070000" (driver="qemu2")
	I0327 11:20:41.624722   10910 client.go:168] LocalClient.Create starting
	I0327 11:20:41.624791   10910 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:20:41.624820   10910 main.go:141] libmachine: Decoding PEM data...
	I0327 11:20:41.624829   10910 main.go:141] libmachine: Parsing certificate...
	I0327 11:20:41.624877   10910 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:20:41.624897   10910 main.go:141] libmachine: Decoding PEM data...
	I0327 11:20:41.624908   10910 main.go:141] libmachine: Parsing certificate...
	I0327 11:20:41.625265   10910 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:20:41.785125   10910 main.go:141] libmachine: Creating SSH key...
	I0327 11:20:41.949228   10910 main.go:141] libmachine: Creating Disk image...
	I0327 11:20:41.949247   10910 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:20:41.949744   10910 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/disk.qcow2
	I0327 11:20:41.961815   10910 main.go:141] libmachine: STDOUT: 
	I0327 11:20:41.961837   10910 main.go:141] libmachine: STDERR: 
	I0327 11:20:41.961886   10910 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/disk.qcow2 +20000M
	I0327 11:20:41.972613   10910 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:20:41.972635   10910 main.go:141] libmachine: STDERR: 
	I0327 11:20:41.972653   10910 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/disk.qcow2
	I0327 11:20:41.972658   10910 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:20:41.972690   10910 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:e3:77:11:89:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/disk.qcow2
	I0327 11:20:41.974558   10910 main.go:141] libmachine: STDOUT: 
	I0327 11:20:41.974573   10910 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:20:41.974592   10910 client.go:171] duration metric: took 349.86725ms to LocalClient.Create
	I0327 11:20:43.269591   10910 cache.go:162] opening:  /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0327 11:20:43.578489   10910 cache.go:162] opening:  /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0327 11:20:43.651404   10910 cache.go:162] opening:  /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0
	I0327 11:20:43.681403   10910 cache.go:162] opening:  /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0327 11:20:43.702901   10910 cache.go:162] opening:  /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0
	I0327 11:20:43.722724   10910 cache.go:162] opening:  /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0
	I0327 11:20:43.730120   10910 cache.go:162] opening:  /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0
	I0327 11:20:43.844552   10910 cache.go:157] /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0327 11:20:43.844603   10910 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 2.240716292s
	I0327 11:20:43.844652   10910 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0327 11:20:43.974989   10910 start.go:128] duration metric: took 2.370943625s to createHost
	I0327 11:20:43.975032   10910 start.go:83] releasing machines lock for "no-preload-070000", held for 2.371038209s
	W0327 11:20:43.975095   10910 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:20:43.986057   10910 out.go:177] * Deleting "no-preload-070000" in qemu2 ...
	W0327 11:20:44.019137   10910 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:20:44.019169   10910 start.go:728] Will try again in 5 seconds ...
	I0327 11:20:44.816495   10910 cache.go:157] /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0327 11:20:44.816538   10910 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.212509458s
	I0327 11:20:44.816586   10910 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0327 11:20:46.732573   10910 cache.go:157] /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 exists
	I0327 11:20:46.732608   10910 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0" took 5.128934042s
	I0327 11:20:46.732622   10910 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 succeeded
	I0327 11:20:47.738725   10910 cache.go:157] /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 exists
	I0327 11:20:47.738796   10910 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0" took 6.134943875s
	I0327 11:20:47.738824   10910 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 succeeded
	I0327 11:20:48.215950   10910 cache.go:157] /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 exists
	I0327 11:20:48.216002   10910 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0" took 6.612097875s
	I0327 11:20:48.216031   10910 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 succeeded
	I0327 11:20:49.019369   10910 start.go:360] acquireMachinesLock for no-preload-070000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:20:49.019767   10910 start.go:364] duration metric: took 317.292µs to acquireMachinesLock for "no-preload-070000"
	I0327 11:20:49.019894   10910 start.go:93] Provisioning new machine with config: &{Name:no-preload-070000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:20:49.020124   10910 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:20:49.030474   10910 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 11:20:49.080577   10910 start.go:159] libmachine.API.Create for "no-preload-070000" (driver="qemu2")
	I0327 11:20:49.080630   10910 client.go:168] LocalClient.Create starting
	I0327 11:20:49.080758   10910 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:20:49.080822   10910 main.go:141] libmachine: Decoding PEM data...
	I0327 11:20:49.080845   10910 main.go:141] libmachine: Parsing certificate...
	I0327 11:20:49.080915   10910 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:20:49.080967   10910 main.go:141] libmachine: Decoding PEM data...
	I0327 11:20:49.080980   10910 main.go:141] libmachine: Parsing certificate...
	I0327 11:20:49.081451   10910 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:20:49.243777   10910 main.go:141] libmachine: Creating SSH key...
	I0327 11:20:49.306905   10910 main.go:141] libmachine: Creating Disk image...
	I0327 11:20:49.306913   10910 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:20:49.307073   10910 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/disk.qcow2
	I0327 11:20:49.319884   10910 main.go:141] libmachine: STDOUT: 
	I0327 11:20:49.319919   10910 main.go:141] libmachine: STDERR: 
	I0327 11:20:49.319988   10910 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/disk.qcow2 +20000M
	I0327 11:20:49.331003   10910 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:20:49.331028   10910 main.go:141] libmachine: STDERR: 
	I0327 11:20:49.331042   10910 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/disk.qcow2
	I0327 11:20:49.331057   10910 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:20:49.331113   10910 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:a8:73:86:56:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/disk.qcow2
	I0327 11:20:49.333044   10910 main.go:141] libmachine: STDOUT: 
	I0327 11:20:49.333059   10910 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:20:49.333074   10910 client.go:171] duration metric: took 252.440916ms to LocalClient.Create
	I0327 11:20:49.957988   10910 cache.go:157] /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 exists
	I0327 11:20:49.958065   10910 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0" took 8.354413917s
	I0327 11:20:49.958106   10910 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 succeeded
	I0327 11:20:51.333445   10910 start.go:128] duration metric: took 2.313301541s to createHost
	I0327 11:20:51.333515   10910 start.go:83] releasing machines lock for "no-preload-070000", held for 2.313732875s
	W0327 11:20:51.333799   10910 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-070000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:20:51.343338   10910 out.go:177] 
	W0327 11:20:51.351413   10910 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:20:51.351449   10910 out.go:239] * 
	W0327 11:20:51.354209   10910 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:20:51.362221   10910 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-070000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-070000 -n no-preload-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-070000 -n no-preload-070000: exit status 7 (67.650792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-070000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.96s)
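
Every qemu2 start in this run fails at the same step: the driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which immediately reports `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning nothing is listening on the socket_vmnet unix socket on this agent. A minimal Go sketch of that connectivity probe (the socket path is taken from the SocketVMnetPath field in the config logged above; this is illustrative, not minikube's own code):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Dial the unix socket that the qemu2 driver hands to socket_vmnet_client.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// With no socket_vmnet daemon running, this prints a
		// "connection refused" error matching the STDERR above.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}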

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-316000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-316000 -n old-k8s-version-316000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-316000 -n old-k8s-version-316000: exit status 7 (34.010166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-316000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
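
This failure is a knock-on effect rather than a new bug: FirstStart for this profile never brought a VM up, so minikube never wrote an old-k8s-version-316000 entry into the kubeconfig, and every later serial step fails fast with `context ... does not exist`. A hedged client-go sketch of that precondition check (illustrative; not the test suite's own helper):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// The integration run points KUBECONFIG at its own scratch file.
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// A profile whose first start failed never gets a context entry,
	// which is exactly what kubectl reports above.
	if _, ok := cfg.Contexts["old-k8s-version-316000"]; !ok {
		fmt.Fprintln(os.Stderr, `error: context "old-k8s-version-316000" does not exist`)
		os.Exit(1)
	}
}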

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-316000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-316000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-316000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.458292ms)

** stderr ** 
	error: context "old-k8s-version-316000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-316000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-316000 -n old-k8s-version-316000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-316000 -n old-k8s-version-316000: exit status 7 (30.869ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-316000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-316000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-316000 -n old-k8s-version-316000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-316000 -n old-k8s-version-316000: exit status 7 (31.246667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-316000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
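
The `(-want +got)` listing above is a structured diff: the expected v1.20.0 image set is compared against the output of `minikube image list`, and because the cluster never booted, the "got" side is empty and every expected image carries a leading `-`. A small sketch of how such a diff is produced, assuming github.com/google/go-cmp (whose output format this matches; the want list is abbreviated here):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"k8s.gcr.io/coredns:1.7.0",
		"k8s.gcr.io/etcd:3.4.13-0",
		"k8s.gcr.io/kube-apiserver:v1.20.0",
	}
	var got []string // empty: the VM never booted, so no images are listed
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
	}
}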

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-316000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-316000 --alsologtostderr -v=1: exit status 83 (50.247958ms)

-- stdout --
	* The control-plane node old-k8s-version-316000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-316000"

-- /stdout --
** stderr ** 
	I0327 11:20:45.640519   10961 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:20:45.640900   10961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:20:45.640903   10961 out.go:304] Setting ErrFile to fd 2...
	I0327 11:20:45.640906   10961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:20:45.641082   10961 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:20:45.641288   10961 out.go:298] Setting JSON to false
	I0327 11:20:45.641298   10961 mustload.go:65] Loading cluster: old-k8s-version-316000
	I0327 11:20:45.641495   10961 config.go:182] Loaded profile config "old-k8s-version-316000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0327 11:20:45.645326   10961 out.go:177] * The control-plane node old-k8s-version-316000 host is not running: state=Stopped
	I0327 11:20:45.656279   10961 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-316000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-316000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-316000 -n old-k8s-version-316000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-316000 -n old-k8s-version-316000: exit status 7 (31.176166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-316000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-316000 -n old-k8s-version-316000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-316000 -n old-k8s-version-316000: exit status 7 (31.162708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-316000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

TestStartStop/group/embed-certs/serial/FirstStart (10.07s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-246000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-246000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (10.009100833s)

-- stdout --
	* [embed-certs-246000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-246000" primary control-plane node in "embed-certs-246000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-246000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:20:46.117984   10984 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:20:46.118120   10984 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:20:46.118123   10984 out.go:304] Setting ErrFile to fd 2...
	I0327 11:20:46.118125   10984 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:20:46.118281   10984 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:20:46.119296   10984 out.go:298] Setting JSON to false
	I0327 11:20:46.135816   10984 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6619,"bootTime":1711557027,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:20:46.135885   10984 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:20:46.139076   10984 out.go:177] * [embed-certs-246000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:20:46.152084   10984 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:20:46.148049   10984 notify.go:220] Checking for updates...
	I0327 11:20:46.159094   10984 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:20:46.166177   10984 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:20:46.172149   10984 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:20:46.179170   10984 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:20:46.183090   10984 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:20:46.185099   10984 config.go:182] Loaded profile config "multinode-906000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:20:46.185160   10984 config.go:182] Loaded profile config "no-preload-070000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0327 11:20:46.185217   10984 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:20:46.189136   10984 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 11:20:46.197118   10984 start.go:297] selected driver: qemu2
	I0327 11:20:46.197124   10984 start.go:901] validating driver "qemu2" against <nil>
	I0327 11:20:46.197129   10984 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:20:46.199480   10984 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 11:20:46.204159   10984 out.go:177] * Automatically selected the socket_vmnet network
	I0327 11:20:46.208246   10984 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:20:46.208281   10984 cni.go:84] Creating CNI manager for ""
	I0327 11:20:46.208289   10984 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:20:46.208292   10984 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 11:20:46.208329   10984 start.go:340] cluster config:
	{Name:embed-certs-246000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:20:46.213098   10984 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:46.221174   10984 out.go:177] * Starting "embed-certs-246000" primary control-plane node in "embed-certs-246000" cluster
	I0327 11:20:46.224098   10984 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 11:20:46.224112   10984 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 11:20:46.224116   10984 cache.go:56] Caching tarball of preloaded images
	I0327 11:20:46.224170   10984 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:20:46.224176   10984 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 11:20:46.224243   10984 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/embed-certs-246000/config.json ...
	I0327 11:20:46.224254   10984 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/embed-certs-246000/config.json: {Name:mk3cbc9c21990078745e5113fb7d8388b0966cdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:20:46.224490   10984 start.go:360] acquireMachinesLock for embed-certs-246000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:20:46.224523   10984 start.go:364] duration metric: took 27.292µs to acquireMachinesLock for "embed-certs-246000"
	I0327 11:20:46.224537   10984 start.go:93] Provisioning new machine with config: &{Name:embed-certs-246000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:20:46.224568   10984 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:20:46.229131   10984 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 11:20:46.245646   10984 start.go:159] libmachine.API.Create for "embed-certs-246000" (driver="qemu2")
	I0327 11:20:46.245671   10984 client.go:168] LocalClient.Create starting
	I0327 11:20:46.245744   10984 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:20:46.245771   10984 main.go:141] libmachine: Decoding PEM data...
	I0327 11:20:46.245780   10984 main.go:141] libmachine: Parsing certificate...
	I0327 11:20:46.245820   10984 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:20:46.245841   10984 main.go:141] libmachine: Decoding PEM data...
	I0327 11:20:46.245848   10984 main.go:141] libmachine: Parsing certificate...
	I0327 11:20:46.246168   10984 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:20:46.398571   10984 main.go:141] libmachine: Creating SSH key...
	I0327 11:20:46.483882   10984 main.go:141] libmachine: Creating Disk image...
	I0327 11:20:46.483889   10984 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:20:46.484054   10984 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/disk.qcow2
	I0327 11:20:46.496520   10984 main.go:141] libmachine: STDOUT: 
	I0327 11:20:46.496539   10984 main.go:141] libmachine: STDERR: 
	I0327 11:20:46.496608   10984 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/disk.qcow2 +20000M
	I0327 11:20:46.507575   10984 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:20:46.507592   10984 main.go:141] libmachine: STDERR: 
	I0327 11:20:46.507603   10984 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/disk.qcow2
	I0327 11:20:46.507608   10984 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:20:46.507638   10984 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:77:09:21:a1:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/disk.qcow2
	I0327 11:20:46.509443   10984 main.go:141] libmachine: STDOUT: 
	I0327 11:20:46.509457   10984 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:20:46.509478   10984 client.go:171] duration metric: took 263.804542ms to LocalClient.Create
	I0327 11:20:48.511632   10984 start.go:128] duration metric: took 2.287038625s to createHost
	I0327 11:20:48.511687   10984 start.go:83] releasing machines lock for "embed-certs-246000", held for 2.287168875s
	W0327 11:20:48.511759   10984 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:20:48.519982   10984 out.go:177] * Deleting "embed-certs-246000" in qemu2 ...
	W0327 11:20:48.553836   10984 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:20:48.553869   10984 start.go:728] Will try again in 5 seconds ...
	I0327 11:20:53.554906   10984 start.go:360] acquireMachinesLock for embed-certs-246000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:20:53.555221   10984 start.go:364] duration metric: took 236.041µs to acquireMachinesLock for "embed-certs-246000"
	I0327 11:20:53.555323   10984 start.go:93] Provisioning new machine with config: &{Name:embed-certs-246000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:20:53.555576   10984 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:20:53.565442   10984 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 11:20:53.605260   10984 start.go:159] libmachine.API.Create for "embed-certs-246000" (driver="qemu2")
	I0327 11:20:53.605322   10984 client.go:168] LocalClient.Create starting
	I0327 11:20:53.605403   10984 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:20:53.605447   10984 main.go:141] libmachine: Decoding PEM data...
	I0327 11:20:53.605468   10984 main.go:141] libmachine: Parsing certificate...
	I0327 11:20:53.605544   10984 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:20:53.605572   10984 main.go:141] libmachine: Decoding PEM data...
	I0327 11:20:53.605596   10984 main.go:141] libmachine: Parsing certificate...
	I0327 11:20:53.606124   10984 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:20:53.931045   10984 main.go:141] libmachine: Creating SSH key...
	I0327 11:20:54.017741   10984 main.go:141] libmachine: Creating Disk image...
	I0327 11:20:54.017752   10984 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:20:54.017933   10984 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/disk.qcow2
	I0327 11:20:54.030357   10984 main.go:141] libmachine: STDOUT: 
	I0327 11:20:54.030378   10984 main.go:141] libmachine: STDERR: 
	I0327 11:20:54.030423   10984 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/disk.qcow2 +20000M
	I0327 11:20:54.041343   10984 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:20:54.041359   10984 main.go:141] libmachine: STDERR: 
	I0327 11:20:54.041371   10984 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/disk.qcow2
	I0327 11:20:54.041376   10984 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:20:54.041423   10984 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:e6:b1:e5:22:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/disk.qcow2
	I0327 11:20:54.043184   10984 main.go:141] libmachine: STDOUT: 
	I0327 11:20:54.043200   10984 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:20:54.043214   10984 client.go:171] duration metric: took 437.890917ms to LocalClient.Create
	I0327 11:20:56.045366   10984 start.go:128] duration metric: took 2.48978125s to createHost
	I0327 11:20:56.045426   10984 start.go:83] releasing machines lock for "embed-certs-246000", held for 2.490193458s
	W0327 11:20:56.045748   10984 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-246000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:20:56.058981   10984 out.go:177] 
	W0327 11:20:56.064569   10984 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:20:56.064650   10984 out.go:239] * 
	W0327 11:20:56.067698   10984 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:20:56.078484   10984 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-246000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-246000 -n embed-certs-246000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-246000 -n embed-certs-246000: exit status 7 (59.247458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-246000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.07s)
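
The ten-second wall time of this failure is the retry loop visible in the stderr above: the driver creates a host, hits the socket_vmnet refusal, deletes the half-created profile, waits a fixed five seconds ("Will try again in 5 seconds"), retries once, and only then exits with GUEST_PROVISION. A simplified sketch of that one-retry flow (function and message names are illustrative, not minikube's internals):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// createHost stands in for the qemu2 driver's host creation, which fails
// on this agent because nothing listens on /var/run/socket_vmnet.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err = createHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			os.Exit(80) // the exit status 80 seen by the test
		}
	}
}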

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-070000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-070000 create -f testdata/busybox.yaml: exit status 1 (28.524208ms)

** stderr ** 
	error: context "no-preload-070000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-070000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-070000 -n no-preload-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-070000 -n no-preload-070000: exit status 7 (30.933041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-070000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-070000 -n no-preload-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-070000 -n no-preload-070000: exit status 7 (31.497167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-070000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-070000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-070000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-070000 describe deploy/metrics-server -n kube-system: exit status 1 (26.961375ms)

** stderr ** 
	error: context "no-preload-070000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-070000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-070000 -n no-preload-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-070000 -n no-preload-070000: exit status 7 (31.14275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-070000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
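
What the assertion above expects: `--images=MetricsServer=...` swaps in a replacement image for the metrics-server addon and `--registries=MetricsServer=fake.domain` prefixes it with a registry, so the test greps the deployment for the joined reference. A tiny sketch of that composition (illustrative only):

package main

import "fmt"

func main() {
	registry := "fake.domain"                 // from --registries=MetricsServer=fake.domain
	image := "registry.k8s.io/echoserver:1.4" // from --images=MetricsServer=registry.k8s.io/echoserver:1.4
	// The deployment spec is expected to reference the registry-prefixed image.
	fmt.Println(registry + "/" + image) // prints: fake.domain/registry.k8s.io/echoserver:1.4
}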

TestStartStop/group/no-preload/serial/SecondStart (7.44s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-070000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-070000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0: exit status 80 (7.366750666s)

-- stdout --
	* [no-preload-070000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-070000" primary control-plane node in "no-preload-070000" cluster
	* Restarting existing qemu2 VM for "no-preload-070000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-070000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:20:53.803240   11034 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:20:53.803431   11034 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:20:53.803434   11034 out.go:304] Setting ErrFile to fd 2...
	I0327 11:20:53.803436   11034 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:20:53.803582   11034 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:20:53.804815   11034 out.go:298] Setting JSON to false
	I0327 11:20:53.823966   11034 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6626,"bootTime":1711557027,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:20:53.824051   11034 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:20:53.831594   11034 out.go:177] * [no-preload-070000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:20:53.845514   11034 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:20:53.840586   11034 notify.go:220] Checking for updates...
	I0327 11:20:53.852486   11034 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:20:53.860434   11034 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:20:53.868538   11034 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:20:53.876473   11034 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:20:53.884488   11034 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:20:53.888874   11034 config.go:182] Loaded profile config "no-preload-070000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0327 11:20:53.889238   11034 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:20:53.893554   11034 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 11:20:53.900510   11034 start.go:297] selected driver: qemu2
	I0327 11:20:53.900518   11034 start.go:901] validating driver "qemu2" against &{Name:no-preload-070000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:20:53.900627   11034 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:20:53.903393   11034 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:20:53.903458   11034 cni.go:84] Creating CNI manager for ""
	I0327 11:20:53.903468   11034 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:20:53.903504   11034 start.go:340] cluster config:
	{Name:no-preload-070000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:20:53.908637   11034 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:53.917504   11034 out.go:177] * Starting "no-preload-070000" primary control-plane node in "no-preload-070000" cluster
	I0327 11:20:53.921524   11034 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 11:20:53.921653   11034 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/no-preload-070000/config.json ...
	I0327 11:20:53.921696   11034 cache.go:107] acquiring lock: {Name:mk0381760554e7607b6127c65d3e951c454618f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:53.921782   11034 cache.go:115] /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0327 11:20:53.921791   11034 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 99.333µs
	I0327 11:20:53.921784   11034 cache.go:107] acquiring lock: {Name:mk9e1b64b99fe11f904ddce72b51aaad8f118289 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:53.921798   11034 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0327 11:20:53.921805   11034 cache.go:107] acquiring lock: {Name:mkbc30056f616060fa94961831c512d4320327bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:53.921752   11034 cache.go:107] acquiring lock: {Name:mk70b44e6f68330b5c5affe2098edb314af1f45f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:53.921861   11034 cache.go:107] acquiring lock: {Name:mk6705a8c85a82585cfd3e6b063296f34d595cd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:53.921884   11034 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0327 11:20:53.921900   11034 cache.go:115] /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 exists
	I0327 11:20:53.921904   11034 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0" took 190.834µs
	I0327 11:20:53.921908   11034 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 succeeded
	I0327 11:20:53.921846   11034 cache.go:115] /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 exists
	I0327 11:20:53.921915   11034 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0" took 111.125µs
	I0327 11:20:53.921919   11034 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 succeeded
	I0327 11:20:53.921909   11034 cache.go:107] acquiring lock: {Name:mka0f2f8e11e0eb72f711f85d8719c830c498994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:53.921901   11034 cache.go:107] acquiring lock: {Name:mk6f02f0d6dbb12708827a9ac435473bdeff022e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:53.921929   11034 cache.go:107] acquiring lock: {Name:mkb46484565a7d2fb472bc8874d6f039ce8ef513 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:53.921962   11034 cache.go:115] /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0327 11:20:53.921976   11034 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 199.834µs
	I0327 11:20:53.921983   11034 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0327 11:20:53.922048   11034 cache.go:115] /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 exists
	I0327 11:20:53.922061   11034 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0" took 199.625µs
	I0327 11:20:53.922068   11034 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 succeeded
	I0327 11:20:53.922079   11034 cache.go:115] /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0327 11:20:53.922084   11034 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 257µs
	I0327 11:20:53.922087   11034 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0327 11:20:53.922173   11034 cache.go:115] /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 exists
	I0327 11:20:53.922180   11034 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0" took 373.125µs
	I0327 11:20:53.922188   11034 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 succeeded
	I0327 11:20:53.922340   11034 start.go:360] acquireMachinesLock for no-preload-070000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:20:53.925612   11034 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0327 11:20:55.884619   11034 cache.go:162] opening:  /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0327 11:20:56.045604   11034 start.go:364] duration metric: took 2.123244459s to acquireMachinesLock for "no-preload-070000"
	I0327 11:20:56.045751   11034 start.go:96] Skipping create...Using existing machine configuration
	I0327 11:20:56.045780   11034 fix.go:54] fixHost starting: 
	I0327 11:20:56.046384   11034 fix.go:112] recreateIfNeeded on no-preload-070000: state=Stopped err=<nil>
	W0327 11:20:56.046413   11034 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 11:20:56.058981   11034 out.go:177] * Restarting existing qemu2 VM for "no-preload-070000" ...
	I0327 11:20:56.064564   11034 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:a8:73:86:56:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/disk.qcow2
	I0327 11:20:56.074857   11034 main.go:141] libmachine: STDOUT: 
	I0327 11:20:56.075128   11034 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:20:56.075254   11034 fix.go:56] duration metric: took 29.474542ms for fixHost
	I0327 11:20:56.075274   11034 start.go:83] releasing machines lock for "no-preload-070000", held for 29.640334ms
	W0327 11:20:56.075311   11034 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:20:56.075679   11034 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:20:56.075698   11034 start.go:728] Will try again in 5 seconds ...
	I0327 11:21:01.076167   11034 start.go:360] acquireMachinesLock for no-preload-070000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:21:01.076571   11034 start.go:364] duration metric: took 312.709µs to acquireMachinesLock for "no-preload-070000"
	I0327 11:21:01.076702   11034 start.go:96] Skipping create...Using existing machine configuration
	I0327 11:21:01.076723   11034 fix.go:54] fixHost starting: 
	I0327 11:21:01.077404   11034 fix.go:112] recreateIfNeeded on no-preload-070000: state=Stopped err=<nil>
	W0327 11:21:01.077432   11034 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 11:21:01.084858   11034 out.go:177] * Restarting existing qemu2 VM for "no-preload-070000" ...
	I0327 11:21:01.089933   11034 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:a8:73:86:56:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/no-preload-070000/disk.qcow2
	I0327 11:21:01.100139   11034 main.go:141] libmachine: STDOUT: 
	I0327 11:21:01.100532   11034 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:21:01.100636   11034 fix.go:56] duration metric: took 23.912667ms for fixHost
	I0327 11:21:01.100661   11034 start.go:83] releasing machines lock for "no-preload-070000", held for 24.065291ms
	W0327 11:21:01.100855   11034 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-070000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-070000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:21:01.108872   11034 out.go:177] 
	W0327 11:21:01.112923   11034 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:21:01.113062   11034 out.go:239] * 
	* 
	W0327 11:21:01.116068   11034 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:21:01.123637   11034 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-070000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-070000 -n no-preload-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-070000 -n no-preload-070000: exit status 7 (68.716ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-070000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (7.44s)
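
All of the start failures in this group die at the same point: qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which dials the socket_vmnet daemon at /var/run/socket_vmnet, and "Connection refused" means nothing was accepting on that socket on the CI host. A minimal check-and-restart sketch (it assumes socket_vmnet was installed via Homebrew, as the /opt/homebrew paths above suggest; the service commands follow the minikube qemu2 driver docs and are not part of this test run):

	# Is anything listening on the socket the driver dials?
	ls -l /var/run/socket_vmnet
	# Restart the daemon (Homebrew-managed socket_vmnet assumed):
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet
	# Retry the failing start once the socket exists:
	out/minikube-darwin-arm64 start -p no-preload-070000 --driver=qemu2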

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-246000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-246000 create -f testdata/busybox.yaml: exit status 1 (28.382291ms)

** stderr ** 
	error: context "embed-certs-246000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-246000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-246000 -n embed-certs-246000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-246000 -n embed-certs-246000: exit status 7 (30.510167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-246000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-246000 -n embed-certs-246000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-246000 -n embed-certs-246000: exit status 7 (30.881792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-246000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
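
This failure and the remaining embed-certs subtests are downstream of the failed SecondStart: since the VM never came up, minikube never wrote an "embed-certs-246000" context into the kubeconfig, so every "kubectl --context embed-certs-246000" call fails immediately with "context ... does not exist" rather than reaching a cluster. A quick manual confirmation (a diagnostic sketch, not part of the harness):

	# List the contexts that actually exist in the kubeconfig the tests use:
	KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig kubectl config get-contexts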

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-246000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-246000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-246000 describe deploy/metrics-server -n kube-system: exit status 1 (26.354083ms)

** stderr ** 
	error: context "embed-certs-246000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-246000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-246000 -n embed-certs-246000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-246000 -n embed-certs-246000: exit status 7 (31.174417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-246000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (6.12s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-246000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-246000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (6.058146541s)

-- stdout --
	* [embed-certs-246000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-246000" primary control-plane node in "embed-certs-246000" cluster
	* Restarting existing qemu2 VM for "embed-certs-246000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-246000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:20:58.532171   11076 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:20:58.532306   11076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:20:58.532310   11076 out.go:304] Setting ErrFile to fd 2...
	I0327 11:20:58.532312   11076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:20:58.532460   11076 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:20:58.533488   11076 out.go:298] Setting JSON to false
	I0327 11:20:58.549406   11076 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6631,"bootTime":1711557027,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:20:58.549476   11076 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:20:58.554332   11076 out.go:177] * [embed-certs-246000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:20:58.564294   11076 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:20:58.561325   11076 notify.go:220] Checking for updates...
	I0327 11:20:58.570303   11076 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:20:58.577216   11076 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:20:58.584333   11076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:20:58.591333   11076 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:20:58.595290   11076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:20:58.599579   11076 config.go:182] Loaded profile config "embed-certs-246000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:20:58.599832   11076 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:20:58.604525   11076 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 11:20:58.612339   11076 start.go:297] selected driver: qemu2
	I0327 11:20:58.612344   11076 start.go:901] validating driver "qemu2" against &{Name:embed-certs-246000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:20:58.612402   11076 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:20:58.614740   11076 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:20:58.614785   11076 cni.go:84] Creating CNI manager for ""
	I0327 11:20:58.614792   11076 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:20:58.614813   11076 start.go:340] cluster config:
	{Name:embed-certs-246000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:20:58.619286   11076 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:20:58.627309   11076 out.go:177] * Starting "embed-certs-246000" primary control-plane node in "embed-certs-246000" cluster
	I0327 11:20:58.631345   11076 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 11:20:58.631360   11076 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 11:20:58.631371   11076 cache.go:56] Caching tarball of preloaded images
	I0327 11:20:58.631436   11076 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:20:58.631443   11076 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 11:20:58.631524   11076 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/embed-certs-246000/config.json ...
	I0327 11:20:58.632033   11076 start.go:360] acquireMachinesLock for embed-certs-246000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:20:58.632065   11076 start.go:364] duration metric: took 24µs to acquireMachinesLock for "embed-certs-246000"
	I0327 11:20:58.632076   11076 start.go:96] Skipping create...Using existing machine configuration
	I0327 11:20:58.632080   11076 fix.go:54] fixHost starting: 
	I0327 11:20:58.632204   11076 fix.go:112] recreateIfNeeded on embed-certs-246000: state=Stopped err=<nil>
	W0327 11:20:58.632217   11076 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 11:20:58.635446   11076 out.go:177] * Restarting existing qemu2 VM for "embed-certs-246000" ...
	I0327 11:20:58.643326   11076 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:e6:b1:e5:22:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/disk.qcow2
	I0327 11:20:58.645336   11076 main.go:141] libmachine: STDOUT: 
	I0327 11:20:58.645359   11076 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:20:58.645387   11076 fix.go:56] duration metric: took 13.306125ms for fixHost
	I0327 11:20:58.645391   11076 start.go:83] releasing machines lock for "embed-certs-246000", held for 13.3215ms
	W0327 11:20:58.645398   11076 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:20:58.645431   11076 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:20:58.645436   11076 start.go:728] Will try again in 5 seconds ...
	I0327 11:21:03.647583   11076 start.go:360] acquireMachinesLock for embed-certs-246000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:21:04.482184   11076 start.go:364] duration metric: took 834.486667ms to acquireMachinesLock for "embed-certs-246000"
	I0327 11:21:04.482337   11076 start.go:96] Skipping create...Using existing machine configuration
	I0327 11:21:04.482356   11076 fix.go:54] fixHost starting: 
	I0327 11:21:04.483053   11076 fix.go:112] recreateIfNeeded on embed-certs-246000: state=Stopped err=<nil>
	W0327 11:21:04.483080   11076 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 11:21:04.491729   11076 out.go:177] * Restarting existing qemu2 VM for "embed-certs-246000" ...
	I0327 11:21:04.504990   11076 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:e6:b1:e5:22:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/embed-certs-246000/disk.qcow2
	I0327 11:21:04.515158   11076 main.go:141] libmachine: STDOUT: 
	I0327 11:21:04.515220   11076 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:21:04.515298   11076 fix.go:56] duration metric: took 32.941667ms for fixHost
	I0327 11:21:04.515316   11076 start.go:83] releasing machines lock for "embed-certs-246000", held for 33.099208ms
	W0327 11:21:04.515530   11076 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-246000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-246000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:21:04.524730   11076 out.go:177] 
	W0327 11:21:04.530631   11076 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:21:04.530657   11076 out.go:239] * 
	* 
	W0327 11:21:04.532957   11076 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:21:04.545675   11076 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-246000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-246000 -n embed-certs-246000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-246000 -n embed-certs-246000: exit status 7 (63.7385ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-246000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.12s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-070000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-070000 -n no-preload-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-070000 -n no-preload-070000: exit status 7 (34.076375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-070000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-070000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-070000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-070000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.304125ms)

** stderr ** 
	error: context "no-preload-070000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-070000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-070000 -n no-preload-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-070000 -n no-preload-070000: exit status 7 (30.66975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-070000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-070000 image list --format=json
start_stop_delete_test.go:304: v1.30.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0-beta.0",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-070000 -n no-preload-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-070000 -n no-preload-070000: exit status 7 (31.233875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-070000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
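
The -want/+got diff above has the full expected image set on the want side and nothing on the got side: with the host stopped, "image list" has no container runtime to query, so it returns an empty list rather than a wrong one. Against a cluster that did start, the same comparison could be made by hand roughly as follows (a sketch; jq is an assumed extra tool, and the repoTags field name should be checked against this minikube version's JSON output):

	out/minikube-darwin-arm64 -p no-preload-070000 image list --format=json | jq -r '.[].repoTags[]' | sort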

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-070000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-070000 --alsologtostderr -v=1: exit status 83 (44.195333ms)

-- stdout --
	* The control-plane node no-preload-070000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-070000"

-- /stdout --
** stderr ** 
	I0327 11:21:01.402874   11095 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:21:01.403041   11095 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:21:01.403049   11095 out.go:304] Setting ErrFile to fd 2...
	I0327 11:21:01.403052   11095 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:21:01.403174   11095 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:21:01.403396   11095 out.go:298] Setting JSON to false
	I0327 11:21:01.403406   11095 mustload.go:65] Loading cluster: no-preload-070000
	I0327 11:21:01.403593   11095 config.go:182] Loaded profile config "no-preload-070000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0327 11:21:01.407883   11095 out.go:177] * The control-plane node no-preload-070000 host is not running: state=Stopped
	I0327 11:21:01.411956   11095 out.go:177]   To start a cluster, run: "minikube start -p no-preload-070000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-070000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-070000 -n no-preload-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-070000 -n no-preload-070000: exit status 7 (31.05175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-070000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-070000 -n no-preload-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-070000 -n no-preload-070000: exit status 7 (30.906833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-070000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
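
Pause never reaches the container runtime here: minikube sees state=Stopped, prints the "host is not running" hint, and exits 83, in contrast to the failed starts above, which exit 80 (GUEST_PROVISION). The harness already treats status exit code 7 as "stopped, may be ok"; the same triage by hand:

	# Exit code 7 from status means the profile exists but the host is stopped:
	out/minikube-darwin-arm64 status -p no-preload-070000; echo "status exit: $?"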

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-657000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-657000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (9.963668042s)

-- stdout --
	* [default-k8s-diff-port-657000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-657000" primary control-plane node in "default-k8s-diff-port-657000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-657000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:21:02.111488   11130 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:21:02.111617   11130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:21:02.111620   11130 out.go:304] Setting ErrFile to fd 2...
	I0327 11:21:02.111623   11130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:21:02.111747   11130 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:21:02.112895   11130 out.go:298] Setting JSON to false
	I0327 11:21:02.129082   11130 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6635,"bootTime":1711557027,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:21:02.129144   11130 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:21:02.133886   11130 out.go:177] * [default-k8s-diff-port-657000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:21:02.138841   11130 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:21:02.138898   11130 notify.go:220] Checking for updates...
	I0327 11:21:02.146730   11130 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:21:02.150846   11130 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:21:02.153784   11130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:21:02.156789   11130 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:21:02.159809   11130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:21:02.163205   11130 config.go:182] Loaded profile config "embed-certs-246000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:21:02.163265   11130 config.go:182] Loaded profile config "multinode-906000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:21:02.163317   11130 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:21:02.167819   11130 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 11:21:02.174828   11130 start.go:297] selected driver: qemu2
	I0327 11:21:02.174834   11130 start.go:901] validating driver "qemu2" against <nil>
	I0327 11:21:02.174840   11130 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:21:02.177155   11130 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 11:21:02.180798   11130 out.go:177] * Automatically selected the socket_vmnet network
	I0327 11:21:02.183909   11130 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:21:02.183963   11130 cni.go:84] Creating CNI manager for ""
	I0327 11:21:02.183970   11130 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:21:02.183975   11130 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 11:21:02.184000   11130 start.go:340] cluster config:
	{Name:default-k8s-diff-port-657000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-657000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/s
ocket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:21:02.188423   11130 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:21:02.193759   11130 out.go:177] * Starting "default-k8s-diff-port-657000" primary control-plane node in "default-k8s-diff-port-657000" cluster
	I0327 11:21:02.197800   11130 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 11:21:02.197818   11130 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 11:21:02.197823   11130 cache.go:56] Caching tarball of preloaded images
	I0327 11:21:02.197884   11130 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:21:02.197889   11130 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 11:21:02.197949   11130 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/default-k8s-diff-port-657000/config.json ...
	I0327 11:21:02.197959   11130 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/default-k8s-diff-port-657000/config.json: {Name:mk0c8097eb7515156c652bc70367259fd691b92e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:21:02.198154   11130 start.go:360] acquireMachinesLock for default-k8s-diff-port-657000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:21:02.198186   11130 start.go:364] duration metric: took 24.958µs to acquireMachinesLock for "default-k8s-diff-port-657000"
	I0327 11:21:02.198199   11130 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-657000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clu
sterName:default-k8s-diff-port-657000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:21:02.198233   11130 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:21:02.205784   11130 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 11:21:02.222560   11130 start.go:159] libmachine.API.Create for "default-k8s-diff-port-657000" (driver="qemu2")
	I0327 11:21:02.222589   11130 client.go:168] LocalClient.Create starting
	I0327 11:21:02.222649   11130 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:21:02.222679   11130 main.go:141] libmachine: Decoding PEM data...
	I0327 11:21:02.222690   11130 main.go:141] libmachine: Parsing certificate...
	I0327 11:21:02.222736   11130 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:21:02.222758   11130 main.go:141] libmachine: Decoding PEM data...
	I0327 11:21:02.222765   11130 main.go:141] libmachine: Parsing certificate...
	I0327 11:21:02.223103   11130 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:21:02.375515   11130 main.go:141] libmachine: Creating SSH key...
	I0327 11:21:02.454499   11130 main.go:141] libmachine: Creating Disk image...
	I0327 11:21:02.454506   11130 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:21:02.454665   11130 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/disk.qcow2
	I0327 11:21:02.466908   11130 main.go:141] libmachine: STDOUT: 
	I0327 11:21:02.466938   11130 main.go:141] libmachine: STDERR: 
	I0327 11:21:02.466994   11130 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/disk.qcow2 +20000M
	I0327 11:21:02.477787   11130 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:21:02.477803   11130 main.go:141] libmachine: STDERR: 
	I0327 11:21:02.477821   11130 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/disk.qcow2
	I0327 11:21:02.477829   11130 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:21:02.477860   11130 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:63:7a:b9:48:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/disk.qcow2
	I0327 11:21:02.479756   11130 main.go:141] libmachine: STDOUT: 
	I0327 11:21:02.479773   11130 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:21:02.479798   11130 client.go:171] duration metric: took 257.20575ms to LocalClient.Create
	I0327 11:21:04.481965   11130 start.go:128] duration metric: took 2.2837255s to createHost
	I0327 11:21:04.482034   11130 start.go:83] releasing machines lock for "default-k8s-diff-port-657000", held for 2.283853625s
	W0327 11:21:04.482148   11130 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:21:04.500703   11130 out.go:177] * Deleting "default-k8s-diff-port-657000" in qemu2 ...
	W0327 11:21:04.557082   11130 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:21:04.557118   11130 start.go:728] Will try again in 5 seconds ...
	I0327 11:21:09.558522   11130 start.go:360] acquireMachinesLock for default-k8s-diff-port-657000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:21:09.558965   11130 start.go:364] duration metric: took 364.5µs to acquireMachinesLock for "default-k8s-diff-port-657000"
	I0327 11:21:09.559101   11130 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-657000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clu
sterName:default-k8s-diff-port-657000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:21:09.559344   11130 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:21:09.569006   11130 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 11:21:09.619702   11130 start.go:159] libmachine.API.Create for "default-k8s-diff-port-657000" (driver="qemu2")
	I0327 11:21:09.619757   11130 client.go:168] LocalClient.Create starting
	I0327 11:21:09.619877   11130 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:21:09.619950   11130 main.go:141] libmachine: Decoding PEM data...
	I0327 11:21:09.619970   11130 main.go:141] libmachine: Parsing certificate...
	I0327 11:21:09.620038   11130 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:21:09.620084   11130 main.go:141] libmachine: Decoding PEM data...
	I0327 11:21:09.620103   11130 main.go:141] libmachine: Parsing certificate...
	I0327 11:21:09.620745   11130 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:21:09.792051   11130 main.go:141] libmachine: Creating SSH key...
	I0327 11:21:09.975825   11130 main.go:141] libmachine: Creating Disk image...
	I0327 11:21:09.975832   11130 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:21:09.976016   11130 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/disk.qcow2
	I0327 11:21:09.988938   11130 main.go:141] libmachine: STDOUT: 
	I0327 11:21:09.988967   11130 main.go:141] libmachine: STDERR: 
	I0327 11:21:09.989043   11130 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/disk.qcow2 +20000M
	I0327 11:21:09.999897   11130 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:21:09.999922   11130 main.go:141] libmachine: STDERR: 
	I0327 11:21:09.999946   11130 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/disk.qcow2
	I0327 11:21:09.999951   11130 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:21:09.999983   11130 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:06:ec:7e:b9:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/disk.qcow2
	I0327 11:21:10.001733   11130 main.go:141] libmachine: STDOUT: 
	I0327 11:21:10.001755   11130 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:21:10.001769   11130 client.go:171] duration metric: took 382.008ms to LocalClient.Create
	I0327 11:21:12.003941   11130 start.go:128] duration metric: took 2.444582666s to createHost
	I0327 11:21:12.003999   11130 start.go:83] releasing machines lock for "default-k8s-diff-port-657000", held for 2.445027458s
	W0327 11:21:12.004417   11130 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-657000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-657000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:21:12.015917   11130 out.go:177] 
	W0327 11:21:12.020013   11130 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:21:12.020043   11130 out.go:239] * 
	* 
	W0327 11:21:12.022460   11130 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:21:12.032974   11130 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-657000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-657000 -n default-k8s-diff-port-657000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-657000 -n default-k8s-diff-port-657000: exit status 7 (71.316667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-657000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.04s)
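Every qemu2 start failure in this report reduces to the same host-side symptom: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives a network fd. A minimal Go sketch for probing that socket directly on the build agent, independent of minikube (the socket path is taken from the logs above; the probe itself is an illustrative assumption, not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Path reported by the failing runs; adjust if socket_vmnet
		// was installed somewhere other than /var/run.
		const sock = "/var/run/socket_vmnet"

		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the minikube error and
			// means no daemon is accepting on the socket.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails the same way, the fix is on the host side (restarting the socket_vmnet daemon, e.g. via brew services if it was installed from Homebrew, which is an assumption about this agent's setup), not in minikube.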

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-246000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-246000 -n embed-certs-246000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-246000 -n embed-certs-246000: exit status 7 (32.870291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-246000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
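The embed-certs failures in this group are downstream of the earlier start failure: the kubeconfig context was never created, so every kubectl call aborts with the "context does not exist" error seen above. One way to confirm which contexts actually exist is to wrap the real `kubectl config get-contexts -o name` subcommand, sketched here in Go (the profile name comes from the logs; the helper itself is hypothetical, not part of the suite):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask kubectl for the bare context names it knows about.
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		for _, ctx := range strings.Fields(string(out)) {
			if ctx == "embed-certs-246000" { // profile under test, per the logs above
				fmt.Println("context exists:", ctx)
				return
			}
		}
		fmt.Println(`context "embed-certs-246000" not found; the cluster never registered a kubeconfig entry`)
	}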

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-246000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-246000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-246000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.117583ms)

** stderr ** 
	error: context "embed-certs-246000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-246000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-246000 -n embed-certs-246000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-246000 -n embed-certs-246000: exit status 7 (31.259833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-246000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-246000 image list --format=json
start_stop_delete_test.go:304: v1.29.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.3",
- 	"registry.k8s.io/kube-controller-manager:v1.29.3",
- 	"registry.k8s.io/kube-proxy:v1.29.3",
- 	"registry.k8s.io/kube-scheduler:v1.29.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-246000 -n embed-certs-246000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-246000 -n embed-certs-246000: exit status 7 (31.129583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-246000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-246000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-246000 --alsologtostderr -v=1: exit status 83 (46.509542ms)

-- stdout --
	* The control-plane node embed-certs-246000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-246000"

-- /stdout --
** stderr ** 
	I0327 11:21:04.819972   11152 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:21:04.820142   11152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:21:04.820146   11152 out.go:304] Setting ErrFile to fd 2...
	I0327 11:21:04.820148   11152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:21:04.820261   11152 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:21:04.820492   11152 out.go:298] Setting JSON to false
	I0327 11:21:04.820502   11152 mustload.go:65] Loading cluster: embed-certs-246000
	I0327 11:21:04.820698   11152 config.go:182] Loaded profile config "embed-certs-246000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:21:04.825505   11152 out.go:177] * The control-plane node embed-certs-246000 host is not running: state=Stopped
	I0327 11:21:04.832544   11152 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-246000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-246000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-246000 -n embed-certs-246000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-246000 -n embed-certs-246000: exit status 7 (31.31425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-246000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-246000 -n embed-certs-246000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-246000 -n embed-certs-246000: exit status 7 (30.737375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-246000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/FirstStart (9.86s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-071000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-071000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0: exit status 80 (9.787248708s)

-- stdout --
	* [newest-cni-071000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-071000" primary control-plane node in "newest-cni-071000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-071000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:21:05.306927   11175 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:21:05.307106   11175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:21:05.307110   11175 out.go:304] Setting ErrFile to fd 2...
	I0327 11:21:05.307113   11175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:21:05.307242   11175 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:21:05.308390   11175 out.go:298] Setting JSON to false
	I0327 11:21:05.324618   11175 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6638,"bootTime":1711557027,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:21:05.324679   11175 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:21:05.329738   11175 out.go:177] * [newest-cni-071000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:21:05.336817   11175 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:21:05.336873   11175 notify.go:220] Checking for updates...
	I0327 11:21:05.344717   11175 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:21:05.347810   11175 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:21:05.350837   11175 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:21:05.353778   11175 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:21:05.356819   11175 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:21:05.367739   11175 config.go:182] Loaded profile config "default-k8s-diff-port-657000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:21:05.367799   11175 config.go:182] Loaded profile config "multinode-906000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:21:05.367869   11175 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:21:05.371828   11175 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 11:21:05.378735   11175 start.go:297] selected driver: qemu2
	I0327 11:21:05.378747   11175 start.go:901] validating driver "qemu2" against <nil>
	I0327 11:21:05.378753   11175 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:21:05.381280   11175 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0327 11:21:05.381309   11175 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0327 11:21:05.389792   11175 out.go:177] * Automatically selected the socket_vmnet network
	I0327 11:21:05.392822   11175 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0327 11:21:05.392865   11175 cni.go:84] Creating CNI manager for ""
	I0327 11:21:05.392879   11175 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:21:05.392884   11175 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 11:21:05.392923   11175 start.go:340] cluster config:
	{Name:newest-cni-071000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-071000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:
false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:21:05.398212   11175 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:21:05.405741   11175 out.go:177] * Starting "newest-cni-071000" primary control-plane node in "newest-cni-071000" cluster
	I0327 11:21:05.409767   11175 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 11:21:05.409785   11175 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0327 11:21:05.409800   11175 cache.go:56] Caching tarball of preloaded images
	I0327 11:21:05.409866   11175 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:21:05.409872   11175 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on docker
	I0327 11:21:05.409934   11175 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/newest-cni-071000/config.json ...
	I0327 11:21:05.409950   11175 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/newest-cni-071000/config.json: {Name:mk20821e89a19f7dd9496ac3dcdd6bb3c32cea82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 11:21:05.410186   11175 start.go:360] acquireMachinesLock for newest-cni-071000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:21:05.410220   11175 start.go:364] duration metric: took 27.5µs to acquireMachinesLock for "newest-cni-071000"
	I0327 11:21:05.410235   11175 start.go:93] Provisioning new machine with config: &{Name:newest-cni-071000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 Cluster
Name:newest-cni-071000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:21:05.410267   11175 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:21:05.417757   11175 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 11:21:05.436496   11175 start.go:159] libmachine.API.Create for "newest-cni-071000" (driver="qemu2")
	I0327 11:21:05.436528   11175 client.go:168] LocalClient.Create starting
	I0327 11:21:05.436593   11175 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:21:05.436624   11175 main.go:141] libmachine: Decoding PEM data...
	I0327 11:21:05.436635   11175 main.go:141] libmachine: Parsing certificate...
	I0327 11:21:05.436684   11175 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:21:05.436708   11175 main.go:141] libmachine: Decoding PEM data...
	I0327 11:21:05.436715   11175 main.go:141] libmachine: Parsing certificate...
	I0327 11:21:05.437125   11175 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:21:05.592468   11175 main.go:141] libmachine: Creating SSH key...
	I0327 11:21:05.642381   11175 main.go:141] libmachine: Creating Disk image...
	I0327 11:21:05.642387   11175 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:21:05.642551   11175 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/disk.qcow2
	I0327 11:21:05.654612   11175 main.go:141] libmachine: STDOUT: 
	I0327 11:21:05.654634   11175 main.go:141] libmachine: STDERR: 
	I0327 11:21:05.654681   11175 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/disk.qcow2 +20000M
	I0327 11:21:05.665519   11175 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:21:05.665541   11175 main.go:141] libmachine: STDERR: 
	I0327 11:21:05.665554   11175 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/disk.qcow2
	I0327 11:21:05.665560   11175 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:21:05.665590   11175 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:ea:bf:42:d7:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/disk.qcow2
	I0327 11:21:05.667454   11175 main.go:141] libmachine: STDOUT: 
	I0327 11:21:05.667470   11175 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:21:05.667488   11175 client.go:171] duration metric: took 230.956417ms to LocalClient.Create
	I0327 11:21:07.669656   11175 start.go:128] duration metric: took 2.259386167s to createHost
	I0327 11:21:07.669743   11175 start.go:83] releasing machines lock for "newest-cni-071000", held for 2.259527625s
	W0327 11:21:07.669807   11175 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:21:07.684854   11175 out.go:177] * Deleting "newest-cni-071000" in qemu2 ...
	W0327 11:21:07.713237   11175 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:21:07.713259   11175 start.go:728] Will try again in 5 seconds ...
	I0327 11:21:12.715459   11175 start.go:360] acquireMachinesLock for newest-cni-071000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:21:12.715871   11175 start.go:364] duration metric: took 289.084µs to acquireMachinesLock for "newest-cni-071000"
	I0327 11:21:12.716021   11175 start.go:93] Provisioning new machine with config: &{Name:newest-cni-071000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 Cluster
Name:newest-cni-071000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 11:21:12.716347   11175 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 11:21:12.722172   11175 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 11:21:12.772326   11175 start.go:159] libmachine.API.Create for "newest-cni-071000" (driver="qemu2")
	I0327 11:21:12.772433   11175 client.go:168] LocalClient.Create starting
	I0327 11:21:12.772568   11175 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/ca.pem
	I0327 11:21:12.772622   11175 main.go:141] libmachine: Decoding PEM data...
	I0327 11:21:12.772643   11175 main.go:141] libmachine: Parsing certificate...
	I0327 11:21:12.772727   11175 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18517-6013/.minikube/certs/cert.pem
	I0327 11:21:12.772761   11175 main.go:141] libmachine: Decoding PEM data...
	I0327 11:21:12.772772   11175 main.go:141] libmachine: Parsing certificate...
	I0327 11:21:12.773538   11175 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso...
	I0327 11:21:12.945513   11175 main.go:141] libmachine: Creating SSH key...
	I0327 11:21:12.986675   11175 main.go:141] libmachine: Creating Disk image...
	I0327 11:21:12.986681   11175 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 11:21:12.986858   11175 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/disk.qcow2.raw /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/disk.qcow2
	I0327 11:21:12.999254   11175 main.go:141] libmachine: STDOUT: 
	I0327 11:21:12.999276   11175 main.go:141] libmachine: STDERR: 
	I0327 11:21:12.999334   11175 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/disk.qcow2 +20000M
	I0327 11:21:13.009900   11175 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 11:21:13.009917   11175 main.go:141] libmachine: STDERR: 
	I0327 11:21:13.009926   11175 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/disk.qcow2
	I0327 11:21:13.009931   11175 main.go:141] libmachine: Starting QEMU VM...
	I0327 11:21:13.009972   11175 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:9f:81:85:0a:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/disk.qcow2
	I0327 11:21:13.011661   11175 main.go:141] libmachine: STDOUT: 
	I0327 11:21:13.011676   11175 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:21:13.011688   11175 client.go:171] duration metric: took 239.2415ms to LocalClient.Create
	I0327 11:21:15.013869   11175 start.go:128] duration metric: took 2.297504709s to createHost
	I0327 11:21:15.013970   11175 start.go:83] releasing machines lock for "newest-cni-071000", held for 2.29808575s
	W0327 11:21:15.014336   11175 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-071000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-071000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:21:15.029206   11175 out.go:177] 
	W0327 11:21:15.033245   11175 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:21:15.033309   11175 out.go:239] * 
	* 
	W0327 11:21:15.036212   11175 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:21:15.049164   11175 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-071000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-071000 -n newest-cni-071000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-071000 -n newest-cni-071000: exit status 7 (65.997083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-071000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.86s)
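Note on the failure mode: every qemu2 start in this report dies on the same first step. socket_vmnet_client must connect to the unix socket at /var/run/socket_vmnet before it can hand QEMU a network file descriptor, and nothing on this host is accepting connections there. A minimal Go sketch of that connectivity probe (a hypothetical standalone check, not minikube code; the socket path is taken from the log above):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Dial the same unix socket socket_vmnet_client uses. A missing or
	// dead daemon produces the "Connection refused" seen in every
	// failure above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

On this agent the probe would exit non-zero, which is consistent with each start attempt failing within milliseconds of launching QEMU.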

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-657000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-657000 create -f testdata/busybox.yaml: exit status 1 (30.26675ms)

** stderr ** 
	error: context "default-k8s-diff-port-657000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-657000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-657000 -n default-k8s-diff-port-657000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-657000 -n default-k8s-diff-port-657000: exit status 7 (31.18525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-657000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-657000 -n default-k8s-diff-port-657000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-657000 -n default-k8s-diff-port-657000: exit status 7 (30.792958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-657000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
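Note: the kubectl error here is a downstream symptom. Because the VM never started, minikube never wrote a "default-k8s-diff-port-657000" entry into the kubeconfig, so kubectl fails while resolving the context, before any network call is made. A sketch of the same lookup (assuming k8s.io/client-go is available; this helper is illustrative, not part of the test suite):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the merged kubeconfig the way kubectl does.
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	const name = "default-k8s-diff-port-657000"
	if _, ok := cfg.Contexts[name]; !ok {
		// Matches the kubectl error captured above.
		fmt.Fprintf(os.Stderr, "error: context %q does not exist\n", name)
		os.Exit(1)
	}
}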

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-657000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-657000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-657000 describe deploy/metrics-server -n kube-system: exit status 1 (27.003875ms)

** stderr ** 
	error: context "default-k8s-diff-port-657000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-657000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-657000 -n default-k8s-diff-port-657000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-657000 -n default-k8s-diff-port-657000: exit status 7 (31.036292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-657000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-657000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-657000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (5.187260209s)

-- stdout --
	* [default-k8s-diff-port-657000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-657000" primary control-plane node in "default-k8s-diff-port-657000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-657000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-657000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:21:16.087391   11245 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:21:16.087518   11245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:21:16.087522   11245 out.go:304] Setting ErrFile to fd 2...
	I0327 11:21:16.087524   11245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:21:16.087636   11245 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:21:16.088624   11245 out.go:298] Setting JSON to false
	I0327 11:21:16.104562   11245 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6649,"bootTime":1711557027,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:21:16.104619   11245 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:21:16.109327   11245 out.go:177] * [default-k8s-diff-port-657000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:21:16.117338   11245 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:21:16.120320   11245 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:21:16.117392   11245 notify.go:220] Checking for updates...
	I0327 11:21:16.127290   11245 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:21:16.130305   11245 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:21:16.133356   11245 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:21:16.136273   11245 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:21:16.139620   11245 config.go:182] Loaded profile config "default-k8s-diff-port-657000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:21:16.139876   11245 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:21:16.143318   11245 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 11:21:16.150302   11245 start.go:297] selected driver: qemu2
	I0327 11:21:16.150307   11245 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-657000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-657000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:21:16.150363   11245 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:21:16.152674   11245 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 11:21:16.152720   11245 cni.go:84] Creating CNI manager for ""
	I0327 11:21:16.152728   11245 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:21:16.152748   11245 start.go:340] cluster config:
	{Name:default-k8s-diff-port-657000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-657000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:21:16.157060   11245 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:21:16.164323   11245 out.go:177] * Starting "default-k8s-diff-port-657000" primary control-plane node in "default-k8s-diff-port-657000" cluster
	I0327 11:21:16.168359   11245 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 11:21:16.168373   11245 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 11:21:16.168380   11245 cache.go:56] Caching tarball of preloaded images
	I0327 11:21:16.168434   11245 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:21:16.168439   11245 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 11:21:16.168501   11245 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/default-k8s-diff-port-657000/config.json ...
	I0327 11:21:16.168968   11245 start.go:360] acquireMachinesLock for default-k8s-diff-port-657000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:21:16.168995   11245 start.go:364] duration metric: took 21µs to acquireMachinesLock for "default-k8s-diff-port-657000"
	I0327 11:21:16.169006   11245 start.go:96] Skipping create...Using existing machine configuration
	I0327 11:21:16.169012   11245 fix.go:54] fixHost starting: 
	I0327 11:21:16.169146   11245 fix.go:112] recreateIfNeeded on default-k8s-diff-port-657000: state=Stopped err=<nil>
	W0327 11:21:16.169155   11245 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 11:21:16.177359   11245 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-657000" ...
	I0327 11:21:16.181297   11245 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:06:ec:7e:b9:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/disk.qcow2
	I0327 11:21:16.183439   11245 main.go:141] libmachine: STDOUT: 
	I0327 11:21:16.183485   11245 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:21:16.183516   11245 fix.go:56] duration metric: took 14.503959ms for fixHost
	I0327 11:21:16.183523   11245 start.go:83] releasing machines lock for "default-k8s-diff-port-657000", held for 14.523333ms
	W0327 11:21:16.183532   11245 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:21:16.183572   11245 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:21:16.183577   11245 start.go:728] Will try again in 5 seconds ...
	I0327 11:21:21.183787   11245 start.go:360] acquireMachinesLock for default-k8s-diff-port-657000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:21:21.184153   11245 start.go:364] duration metric: took 269.291µs to acquireMachinesLock for "default-k8s-diff-port-657000"
	I0327 11:21:21.184277   11245 start.go:96] Skipping create...Using existing machine configuration
	I0327 11:21:21.184295   11245 fix.go:54] fixHost starting: 
	I0327 11:21:21.185033   11245 fix.go:112] recreateIfNeeded on default-k8s-diff-port-657000: state=Stopped err=<nil>
	W0327 11:21:21.185058   11245 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 11:21:21.194237   11245 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-657000" ...
	I0327 11:21:21.198659   11245 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:06:ec:7e:b9:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/default-k8s-diff-port-657000/disk.qcow2
	I0327 11:21:21.208395   11245 main.go:141] libmachine: STDOUT: 
	I0327 11:21:21.208458   11245 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:21:21.208536   11245 fix.go:56] duration metric: took 24.242542ms for fixHost
	I0327 11:21:21.208557   11245 start.go:83] releasing machines lock for "default-k8s-diff-port-657000", held for 24.380125ms
	W0327 11:21:21.208714   11245 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-657000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-657000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:21:21.216339   11245 out.go:177] 
	W0327 11:21:21.219482   11245 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:21:21.219506   11245 out.go:239] * 
	* 
	W0327 11:21:21.222074   11245 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:21:21.230444   11245 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-657000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-657000 -n default-k8s-diff-port-657000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-657000 -n default-k8s-diff-port-657000: exit status 7 (67.940083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-657000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)
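Note on the retry visible in the trace: the start path attempts fixHost, releases the machines lock on failure, waits five seconds, and tries exactly once more; only the second failure is fatal (exit status 80). A simplified sketch of that shape (illustrative only, not the actual start.go code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the real driver start; here it always fails the
// way the log does while socket_vmnet is down.
func startHost() error {
	return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			fmt.Printf("* Failed to start qemu2 VM: %v\n", err)
		}
	}
}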

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-071000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-071000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0: exit status 80 (5.18778175s)

-- stdout --
	* [newest-cni-071000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-071000" primary control-plane node in "newest-cni-071000" cluster
	* Restarting existing qemu2 VM for "newest-cni-071000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-071000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 11:21:18.454399   11268 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:21:18.454547   11268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:21:18.454551   11268 out.go:304] Setting ErrFile to fd 2...
	I0327 11:21:18.454552   11268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:21:18.454683   11268 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:21:18.455708   11268 out.go:298] Setting JSON to false
	I0327 11:21:18.471641   11268 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6651,"bootTime":1711557027,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 11:21:18.471711   11268 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 11:21:18.476592   11268 out.go:177] * [newest-cni-071000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 11:21:18.484436   11268 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 11:21:18.487629   11268 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 11:21:18.484511   11268 notify.go:220] Checking for updates...
	I0327 11:21:18.494463   11268 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 11:21:18.498530   11268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 11:21:18.501599   11268 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 11:21:18.504536   11268 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 11:21:18.507843   11268 config.go:182] Loaded profile config "newest-cni-071000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0327 11:21:18.508099   11268 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 11:21:18.511543   11268 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 11:21:18.518539   11268 start.go:297] selected driver: qemu2
	I0327 11:21:18.518546   11268 start.go:901] validating driver "qemu2" against &{Name:newest-cni-071000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-071000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:21:18.518606   11268 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 11:21:18.520942   11268 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0327 11:21:18.520987   11268 cni.go:84] Creating CNI manager for ""
	I0327 11:21:18.520995   11268 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 11:21:18.521017   11268 start.go:340] cluster config:
	{Name:newest-cni-071000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-071000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 11:21:18.525540   11268 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 11:21:18.532534   11268 out.go:177] * Starting "newest-cni-071000" primary control-plane node in "newest-cni-071000" cluster
	I0327 11:21:18.536592   11268 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 11:21:18.536606   11268 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0327 11:21:18.536619   11268 cache.go:56] Caching tarball of preloaded images
	I0327 11:21:18.536681   11268 preload.go:173] Found /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 11:21:18.536687   11268 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on docker
	I0327 11:21:18.536755   11268 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/newest-cni-071000/config.json ...
	I0327 11:21:18.537249   11268 start.go:360] acquireMachinesLock for newest-cni-071000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:21:18.537275   11268 start.go:364] duration metric: took 19.584µs to acquireMachinesLock for "newest-cni-071000"
	I0327 11:21:18.537284   11268 start.go:96] Skipping create...Using existing machine configuration
	I0327 11:21:18.537289   11268 fix.go:54] fixHost starting: 
	I0327 11:21:18.537433   11268 fix.go:112] recreateIfNeeded on newest-cni-071000: state=Stopped err=<nil>
	W0327 11:21:18.537441   11268 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 11:21:18.540537   11268 out.go:177] * Restarting existing qemu2 VM for "newest-cni-071000" ...
	I0327 11:21:18.547566   11268 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:9f:81:85:0a:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/disk.qcow2
	I0327 11:21:18.549646   11268 main.go:141] libmachine: STDOUT: 
	I0327 11:21:18.549667   11268 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:21:18.549694   11268 fix.go:56] duration metric: took 12.404458ms for fixHost
	I0327 11:21:18.549700   11268 start.go:83] releasing machines lock for "newest-cni-071000", held for 12.420958ms
	W0327 11:21:18.549708   11268 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:21:18.549739   11268 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:21:18.549743   11268 start.go:728] Will try again in 5 seconds ...
	I0327 11:21:23.550132   11268 start.go:360] acquireMachinesLock for newest-cni-071000: {Name:mk45a601bfd1701ab20d2a639c96fa7017594b13 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 11:21:23.550618   11268 start.go:364] duration metric: took 289.667µs to acquireMachinesLock for "newest-cni-071000"
	I0327 11:21:23.550753   11268 start.go:96] Skipping create...Using existing machine configuration
	I0327 11:21:23.550773   11268 fix.go:54] fixHost starting: 
	I0327 11:21:23.551515   11268 fix.go:112] recreateIfNeeded on newest-cni-071000: state=Stopped err=<nil>
	W0327 11:21:23.551545   11268 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 11:21:23.559933   11268 out.go:177] * Restarting existing qemu2 VM for "newest-cni-071000" ...
	I0327 11:21:23.564093   11268 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:9f:81:85:0a:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18517-6013/.minikube/machines/newest-cni-071000/disk.qcow2
	I0327 11:21:23.574587   11268 main.go:141] libmachine: STDOUT: 
	I0327 11:21:23.574669   11268 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 11:21:23.574787   11268 fix.go:56] duration metric: took 24.01375ms for fixHost
	I0327 11:21:23.574814   11268 start.go:83] releasing machines lock for "newest-cni-071000", held for 24.168708ms
	W0327 11:21:23.575066   11268 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-071000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-071000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 11:21:23.583934   11268 out.go:177] 
	W0327 11:21:23.586995   11268 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 11:21:23.587021   11268 out.go:239] * 
	* 
	W0327 11:21:23.589449   11268 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 11:21:23.597976   11268 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-071000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-071000 -n newest-cni-071000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-071000 -n newest-cni-071000: exit status 7 (70.678375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-071000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
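Note on the QEMU invocation: the recorded command line wraps qemu-system-aarch64 in socket_vmnet_client, which is expected to connect to /var/run/socket_vmnet and exec QEMU with the connected socket inherited as descriptor 3; that is what "-netdev socket,id=net0,fd=3" refers to. A simplified Go sketch of that file-descriptor handoff (the real client is a separate C program; this only illustrates the idea):

package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		log.Fatalf(`Failed to connect to "/var/run/socket_vmnet": %v`, err)
	}
	// Dup the connection to an *os.File so a child process can inherit it.
	f, err := conn.(*net.UnixConn).File()
	if err != nil {
		log.Fatal(err)
	}
	// ExtraFiles[0] becomes fd 3 in the child, matching "-netdev socket,fd=3".
	cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
	cmd.ExtraFiles = []*os.File{f}
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}

When the initial dial fails there is no descriptor to pass, so QEMU is never started; hence the empty STDOUT in every attempt above.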

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-657000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-657000 -n default-k8s-diff-port-657000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-657000 -n default-k8s-diff-port-657000: exit status 7 (33.142959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-657000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-657000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-657000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-657000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.782042ms)

** stderr ** 
	error: context "default-k8s-diff-port-657000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-657000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-657000 -n default-k8s-diff-port-657000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-657000 -n default-k8s-diff-port-657000: exit status 7 (31.475167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-657000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-657000 image list --format=json
start_stop_delete_test.go:304: v1.29.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.3",
- 	"registry.k8s.io/kube-controller-manager:v1.29.3",
- 	"registry.k8s.io/kube-proxy:v1.29.3",
- 	"registry.k8s.io/kube-scheduler:v1.29.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-657000 -n default-k8s-diff-port-657000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-657000 -n default-k8s-diff-port-657000: exit status 7 (30.953042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-657000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
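Note on the "(-want +got)" listing: that is the diff notation of a cmp-style comparison, where "-" lines are expected values absent from the actual result; every expected image shows as missing because "image list" has nothing to report for a stopped VM. A sketch of how such an all-minus diff arises (assuming github.com/google/go-cmp, which matches the notation above):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.29.3",
		"registry.k8s.io/pause:3.9",
	}
	got := []string{} // a stopped VM returns no images
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.29.3 images missing (-want +got):\n%s", diff)
	}
}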

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-657000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-657000 --alsologtostderr -v=1: exit status 83 (43.544666ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-657000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-657000"

-- /stdout --
** stderr ** 
	I0327 11:21:21.509135   11287 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:21:21.509263   11287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:21:21.509267   11287 out.go:304] Setting ErrFile to fd 2...
	I0327 11:21:21.509269   11287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:21:21.509402   11287 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:21:21.509620   11287 out.go:298] Setting JSON to false
	I0327 11:21:21.509633   11287 mustload.go:65] Loading cluster: default-k8s-diff-port-657000
	I0327 11:21:21.509834   11287 config.go:182] Loaded profile config "default-k8s-diff-port-657000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 11:21:21.514511   11287 out.go:177] * The control-plane node default-k8s-diff-port-657000 host is not running: state=Stopped
	I0327 11:21:21.518492   11287 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-657000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-657000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-657000 -n default-k8s-diff-port-657000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-657000 -n default-k8s-diff-port-657000: exit status 7 (30.458958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-657000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-657000 -n default-k8s-diff-port-657000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-657000 -n default-k8s-diff-port-657000: exit status 7 (30.459708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-657000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
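Note on "exit status 7 (may be ok)": assuming minikube's status command composes its exit code as a bitmask with one bit each for the host, kubelet, and apiserver not running (an assumption based on its cmd/status implementation, not a documented contract), 7 means all three components are down, i.e. a cleanly stopped profile rather than a crash. A small sketch decoding the code under that assumption:

package main

import "fmt"

// Assumed bit layout; treat these names and values as an assumption.
const (
	hostNotRunning      = 1 << 0
	kubeletNotRunning   = 1 << 1
	apiserverNotRunning = 1 << 2
)

func main() {
	code := 7 // exit status observed in every post-mortem in this report
	fmt.Println("host stopped:     ", code&hostNotRunning != 0)
	fmt.Println("kubelet stopped:  ", code&kubeletNotRunning != 0)
	fmt.Println("apiserver stopped:", code&apiserverNotRunning != 0)
}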

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-071000 image list --format=json
start_stop_delete_test.go:304: v1.30.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0-beta.0",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-071000 -n newest-cni-071000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-071000 -n newest-cni-071000: exit status 7 (32.529375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-071000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)
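
Note: the "(-want +got)" diff above is go-cmp notation; every "-" line is an expected image that the stopped cluster never reported. A minimal sketch of how such a diff is rendered, assuming the comparison uses github.com/google/go-cmp (which emits this format); the image list is abridged from the failure above:

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		// Expected images for v1.30.0-beta.0 (abridged from the test output above).
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/kube-apiserver:v1.30.0-beta.0",
			"registry.k8s.io/pause:3.9",
		}
		// The stopped host reported no images at all.
		got := []string{}

		// cmp.Diff prefixes entries present only in want with "-" and
		// entries present only in got with "+", matching the report.
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}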

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-071000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-071000 --alsologtostderr -v=1: exit status 83 (42.822416ms)

-- stdout --
	* The control-plane node newest-cni-071000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-071000"

-- /stdout --
** stderr ** 
	I0327 11:21:23.792533   11317 out.go:291] Setting OutFile to fd 1 ...
	I0327 11:21:23.792675   11317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:21:23.792678   11317 out.go:304] Setting ErrFile to fd 2...
	I0327 11:21:23.792681   11317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 11:21:23.792806   11317 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 11:21:23.793028   11317 out.go:298] Setting JSON to false
	I0327 11:21:23.793036   11317 mustload.go:65] Loading cluster: newest-cni-071000
	I0327 11:21:23.793237   11317 config.go:182] Loaded profile config "newest-cni-071000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0327 11:21:23.797465   11317 out.go:177] * The control-plane node newest-cni-071000 host is not running: state=Stopped
	I0327 11:21:23.800527   11317 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-071000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-071000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-071000 -n newest-cni-071000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-071000 -n newest-cni-071000: exit status 7 (32.156208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-071000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-071000 -n newest-cni-071000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-071000 -n newest-cni-071000: exit status 7 (32.627333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-071000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)
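
Note: each post-mortem above runs "minikube status --format={{.Host}}" and records "status error: exit status 7 (may be ok)"; exit code 7 only means the host exists but is stopped, so log retrieval is skipped rather than failed. A minimal sketch of that check in Go; the helper name hostState is hypothetical and not part of helpers_test.go:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostState runs `minikube status --format={{.Host}}` for a profile.
	// Exit status 7 (host stopped, not missing) is treated as non-fatal,
	// mirroring the "may be ok" handling in the post-mortems above.
	func hostState(profile string) (string, error) {
		out, err := exec.Command("minikube", "status",
			"--format={{.Host}}", "-p", profile).Output()
		state := strings.TrimSpace(string(out))
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
			return state, nil // e.g. "Stopped"; caller skips log retrieval
		}
		return state, err
	}

	func main() {
		state, err := hostState("newest-cni-071000")
		fmt.Println(state, err)
	}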

Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.29.3/json-events 123.3
13 TestDownloadOnly/v1.29.3/preload-exists 0
16 TestDownloadOnly/v1.29.3/kubectl 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.08
18 TestDownloadOnly/v1.29.3/DeleteAll 0.23
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.23
21 TestDownloadOnly/v1.30.0-beta.0/json-events 28.42
22 TestDownloadOnly/v1.30.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.30.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.30.0-beta.0/LogsDuration 0.09
27 TestDownloadOnly/v1.30.0-beta.0/DeleteAll 0.23
28 TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds 0.23
30 TestBinaryMirror 0.34
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 10.17
48 TestErrorSpam/start 0.4
49 TestErrorSpam/status 0.1
50 TestErrorSpam/pause 0.13
51 TestErrorSpam/unpause 0.12
52 TestErrorSpam/stop 8.55
55 TestFunctional/serial/CopySyncFile 0.01
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 6.27
64 TestFunctional/serial/CacheCmd/cache/add_local 1.22
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.03
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.23
80 TestFunctional/parallel/DryRun 0.28
81 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/AddonsCmd 0.12
102 TestFunctional/parallel/License 1.46
103 TestFunctional/parallel/Version/short 0.04
110 TestFunctional/parallel/ImageCommands/Setup 6.52
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.08
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.13
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
135 TestFunctional/parallel/ProfileCmd/profile_list 0.11
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_addon-resizer_images 0.17
145 TestFunctional/delete_my-image_image 0.04
146 TestFunctional/delete_minikube_cached_images 0.04
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 3.35
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.33
202 TestMainNoArgs 0.03
249 TestStoppedBinaryUpgrade/Setup 5.42
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
266 TestNoKubernetes/serial/ProfileList 31.4
267 TestNoKubernetes/serial/Stop 2.11
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
278 TestStoppedBinaryUpgrade/MinikubeLogs 0.7
284 TestStartStop/group/old-k8s-version/serial/Stop 3.03
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
297 TestStartStop/group/no-preload/serial/Stop 1.99
298 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
302 TestStartStop/group/embed-certs/serial/Stop 2.02
303 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
319 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.61
320 TestStartStop/group/newest-cni/serial/DeployApp 0
321 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
322 TestStartStop/group/newest-cni/serial/Stop 3.1
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
325 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-853000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-853000: exit status 85 (102.348292ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-853000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:51 PDT |          |
	|         | -p download-only-853000        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=docker     |                      |         |                |                     |          |
	|         | --driver=qemu2                 |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 10:51:24
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 10:51:24.276129    6478 out.go:291] Setting OutFile to fd 1 ...
	I0327 10:51:24.276285    6478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:51:24.276288    6478 out.go:304] Setting ErrFile to fd 2...
	I0327 10:51:24.276291    6478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:51:24.276409    6478 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	W0327 10:51:24.276473    6478 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18517-6013/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18517-6013/.minikube/config/config.json: no such file or directory
	I0327 10:51:24.277726    6478 out.go:298] Setting JSON to true
	I0327 10:51:24.295447    6478 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4857,"bootTime":1711557027,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 10:51:24.295515    6478 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 10:51:24.307821    6478 out.go:97] [download-only-853000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 10:51:24.311640    6478 out.go:169] MINIKUBE_LOCATION=18517
	W0327 10:51:24.307964    6478 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball: no such file or directory
	I0327 10:51:24.307989    6478 notify.go:220] Checking for updates...
	I0327 10:51:24.335875    6478 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 10:51:24.339720    6478 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 10:51:24.343644    6478 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 10:51:24.349974    6478 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	W0327 10:51:24.356722    6478 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 10:51:24.356941    6478 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 10:51:24.360649    6478 out.go:97] Using the qemu2 driver based on user configuration
	I0327 10:51:24.360670    6478 start.go:297] selected driver: qemu2
	I0327 10:51:24.360688    6478 start.go:901] validating driver "qemu2" against <nil>
	I0327 10:51:24.360756    6478 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 10:51:24.363648    6478 out.go:169] Automatically selected the socket_vmnet network
	I0327 10:51:24.370571    6478 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0327 10:51:24.370693    6478 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 10:51:24.370776    6478 cni.go:84] Creating CNI manager for ""
	I0327 10:51:24.370797    6478 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0327 10:51:24.370871    6478 start.go:340] cluster config:
	{Name:download-only-853000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-853000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 10:51:24.376461    6478 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 10:51:24.380642    6478 out.go:97] Downloading VM boot image ...
	I0327 10:51:24.380687    6478 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/iso/arm64/minikube-v1.33.0-beta.0-arm64.iso
	I0327 10:51:43.555953    6478 out.go:97] Starting "download-only-853000" primary control-plane node in "download-only-853000" cluster
	I0327 10:51:43.555984    6478 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 10:51:43.842320    6478 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0327 10:51:43.842370    6478 cache.go:56] Caching tarball of preloaded images
	I0327 10:51:43.843088    6478 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 10:51:43.847621    6478 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0327 10:51:43.847652    6478 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 10:51:44.524845    6478 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0327 10:52:05.454718    6478 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 10:52:05.454903    6478 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 10:52:06.156780    6478 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0327 10:52:06.156982    6478 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/download-only-853000/config.json ...
	I0327 10:52:06.157001    6478 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/download-only-853000/config.json: {Name:mk9a4ae1f3d56b5a01555ebba0017eed2a21b775 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 10:52:06.158126    6478 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 10:52:06.158560    6478 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0327 10:52:06.807429    6478 out.go:169] 
	W0327 10:52:06.812535    6478 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18517-6013/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1083df220 0x1083df220 0x1083df220 0x1083df220 0x1083df220 0x1083df220 0x1083df220] Decompressors:map[bz2:0x1400063f830 gz:0x1400063f838 tar:0x1400063f7e0 tar.bz2:0x1400063f7f0 tar.gz:0x1400063f800 tar.xz:0x1400063f810 tar.zst:0x1400063f820 tbz2:0x1400063f7f0 tgz:0x1400063f800 txz:0x1400063f810 tzst:0x1400063f820 xz:0x1400063f840 zip:0x1400063f850 zst:0x1400063f848] Getters:map[file:0x14000526960 http:0x140000b4a50 https:0x140000b4aa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0327 10:52:06.812559    6478 out_reason.go:110] 
	W0327 10:52:06.820392    6478 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 10:52:06.824407    6478 out.go:169] 
	
	
	* The control-plane node download-only-853000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-853000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
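
Note: the kubectl caching failure logged above is the checksum fetch going wrong, not the binary download itself: minikube hands hashicorp/go-getter (the "getter:" struct in the warning) a source URL with a "?checksum=file:<url>.sha256" suffix, and dl.k8s.io answers 404 for the v1.20.0 darwin/arm64 kubectl.sha256 file, so the whole download is rejected as "invalid checksum". A minimal sketch of that mechanism, assuming go-getter's GetFile entry point; the /tmp destination path is arbitrary:

	package main

	import (
		"fmt"

		getter "github.com/hashicorp/go-getter"
	)

	func main() {
		// go-getter downloads the file named after "checksum=file:" first
		// and verifies the main download against it; a 404 on the checksum
		// URL aborts the get, which is the failure logged above.
		src := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl" +
			"?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		if err := getter.GetFile("/tmp/kubectl", src); err != nil {
			fmt.Println("download failed:", err)
		}
	}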

TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-853000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.29.3/json-events (123.3s)

=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-089000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-089000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=qemu2 : (2m3.304200459s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (123.30s)

TestDownloadOnly/v1.29.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.3/kubectl
--- PASS: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-089000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-089000: exit status 85 (80.984917ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-853000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:51 PDT |                     |
	|         | -p download-only-853000        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=qemu2                 |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:52 PDT | 27 Mar 24 10:52 PDT |
	| delete  | -p download-only-853000        | download-only-853000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:52 PDT | 27 Mar 24 10:52 PDT |
	| start   | -o=json --download-only        | download-only-089000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:52 PDT |                     |
	|         | -p download-only-089000        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=qemu2                 |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 10:52:07
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 10:52:07.510540    6565 out.go:291] Setting OutFile to fd 1 ...
	I0327 10:52:07.510709    6565 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:52:07.510712    6565 out.go:304] Setting ErrFile to fd 2...
	I0327 10:52:07.510714    6565 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:52:07.510833    6565 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 10:52:07.511915    6565 out.go:298] Setting JSON to true
	I0327 10:52:07.527922    6565 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4900,"bootTime":1711557027,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 10:52:07.527980    6565 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 10:52:07.532945    6565 out.go:97] [download-only-089000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 10:52:07.537029    6565 out.go:169] MINIKUBE_LOCATION=18517
	I0327 10:52:07.533020    6565 notify.go:220] Checking for updates...
	I0327 10:52:07.543983    6565 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 10:52:07.546997    6565 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 10:52:07.549992    6565 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 10:52:07.553010    6565 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	W0327 10:52:07.559000    6565 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 10:52:07.559196    6565 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 10:52:07.561987    6565 out.go:97] Using the qemu2 driver based on user configuration
	I0327 10:52:07.561997    6565 start.go:297] selected driver: qemu2
	I0327 10:52:07.562001    6565 start.go:901] validating driver "qemu2" against <nil>
	I0327 10:52:07.562056    6565 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 10:52:07.564973    6565 out.go:169] Automatically selected the socket_vmnet network
	I0327 10:52:07.569992    6565 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0327 10:52:07.570087    6565 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 10:52:07.570132    6565 cni.go:84] Creating CNI manager for ""
	I0327 10:52:07.570141    6565 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 10:52:07.570146    6565 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 10:52:07.570184    6565 start.go:340] cluster config:
	{Name:download-only-089000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-089000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 10:52:07.574447    6565 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 10:52:07.576972    6565 out.go:97] Starting "download-only-089000" primary control-plane node in "download-only-089000" cluster
	I0327 10:52:07.576981    6565 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 10:52:08.222680    6565 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 10:52:08.222807    6565 cache.go:56] Caching tarball of preloaded images
	I0327 10:52:08.223522    6565 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 10:52:08.229130    6565 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0327 10:52:08.229162    6565 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0327 10:52:08.834617    6565 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4?checksum=md5:c0bb0715201da444334d968c298f45eb -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 10:54:02.317572    6565 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0327 10:54:02.317757    6565 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0327 10:54:02.875197    6565 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 10:54:02.875400    6565 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/download-only-089000/config.json ...
	I0327 10:54:02.875416    6565 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/download-only-089000/config.json: {Name:mk29ff7b11baf2cb6484e167e739b61f6b435571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 10:54:02.875677    6565 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 10:54:02.875796    6565 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/darwin/arm64/v1.29.3/kubectl
	
	
	* The control-plane node download-only-089000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-089000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

TestDownloadOnly/v1.29.3/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.23s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-089000
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.30.0-beta.0/json-events (28.42s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-275000 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-275000 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=docker --driver=qemu2 : (28.421693833s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/json-events (28.42s)

TestDownloadOnly/v1.30.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-275000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-275000: exit status 85 (84.893667ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-853000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:51 PDT |                     |
	|         | -p download-only-853000             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |                |                     |                     |
	|         | --container-runtime=docker          |                      |         |                |                     |                     |
	|         | --driver=qemu2                      |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:52 PDT | 27 Mar 24 10:52 PDT |
	| delete  | -p download-only-853000             | download-only-853000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:52 PDT | 27 Mar 24 10:52 PDT |
	| start   | -o=json --download-only             | download-only-089000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:52 PDT |                     |
	|         | -p download-only-089000             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3        |                      |         |                |                     |                     |
	|         | --container-runtime=docker          |                      |         |                |                     |                     |
	|         | --driver=qemu2                      |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
	| delete  | -p download-only-089000             | download-only-089000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT | 27 Mar 24 10:54 PDT |
	| start   | -o=json --download-only             | download-only-275000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 10:54 PDT |                     |
	|         | -p download-only-275000             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0 |                      |         |                |                     |                     |
	|         | --container-runtime=docker          |                      |         |                |                     |                     |
	|         | --driver=qemu2                      |                      |         |                |                     |                     |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 10:54:11
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 10:54:11.367358    6736 out.go:291] Setting OutFile to fd 1 ...
	I0327 10:54:11.367487    6736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:54:11.367491    6736 out.go:304] Setting ErrFile to fd 2...
	I0327 10:54:11.367494    6736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:54:11.367626    6736 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 10:54:11.368647    6736 out.go:298] Setting JSON to true
	I0327 10:54:11.384759    6736 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5024,"bootTime":1711557027,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 10:54:11.384819    6736 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 10:54:11.388266    6736 out.go:97] [download-only-275000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 10:54:11.392151    6736 out.go:169] MINIKUBE_LOCATION=18517
	I0327 10:54:11.388362    6736 notify.go:220] Checking for updates...
	I0327 10:54:11.400192    6736 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 10:54:11.403213    6736 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 10:54:11.406204    6736 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 10:54:11.413130    6736 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	W0327 10:54:11.419221    6736 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 10:54:11.419423    6736 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 10:54:11.423162    6736 out.go:97] Using the qemu2 driver based on user configuration
	I0327 10:54:11.423169    6736 start.go:297] selected driver: qemu2
	I0327 10:54:11.423172    6736 start.go:901] validating driver "qemu2" against <nil>
	I0327 10:54:11.423206    6736 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 10:54:11.426154    6736 out.go:169] Automatically selected the socket_vmnet network
	I0327 10:54:11.431370    6736 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0327 10:54:11.431465    6736 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 10:54:11.431504    6736 cni.go:84] Creating CNI manager for ""
	I0327 10:54:11.431513    6736 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 10:54:11.431525    6736 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 10:54:11.431570    6736 start.go:340] cluster config:
	{Name:download-only-275000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:download-only-275000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 10:54:11.435911    6736 iso.go:125] acquiring lock: {Name:mk3dcd3a560822e126b5189815ab45486a955dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 10:54:11.439213    6736 out.go:97] Starting "download-only-275000" primary control-plane node in "download-only-275000" cluster
	I0327 10:54:11.439223    6736 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 10:54:11.976776    6736 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0327 10:54:11.976833    6736 cache.go:56] Caching tarball of preloaded images
	I0327 10:54:11.977560    6736 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 10:54:11.982506    6736 out.go:97] Downloading Kubernetes v1.30.0-beta.0 preload ...
	I0327 10:54:11.982530    6736 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 10:54:12.555932    6736 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:e2591d3d8d44bfdea8fdcdf9682f34bf -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0327 10:54:31.144679    6736 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 10:54:31.144836    6736 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 10:54:31.689602    6736 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on docker
	I0327 10:54:31.689792    6736 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/download-only-275000/config.json ...
	I0327 10:54:31.689808    6736 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18517-6013/.minikube/profiles/download-only-275000/config.json: {Name:mkb1dff896bce211945d7e4908f144fa3d18c8db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 10:54:31.690092    6736 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 10:54:31.690209    6736 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18517-6013/.minikube/cache/darwin/arm64/v1.30.0-beta.0/kubectl
	
	
	* The control-plane node download-only-275000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-275000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.09s)

TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.23s)

TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-275000
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.34s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-443000 --alsologtostderr --binary-mirror http://127.0.0.1:50912 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-443000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-443000
--- PASS: TestBinaryMirror (0.34s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-723000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-723000: exit status 85 (66.300833ms)

-- stdout --
	* Profile "addons-723000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-723000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-723000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-723000: exit status 85 (62.216084ms)

-- stdout --
	* Profile "addons-723000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-723000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (10.17s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.17s)

TestErrorSpam/start (0.4s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

TestErrorSpam/status (0.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 status: exit status 7 (33.731542ms)

-- stdout --
	nospam-472000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 status: exit status 7 (32.316542ms)

-- stdout --
	nospam-472000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 status: exit status 7 (32.124583ms)

-- stdout --
	nospam-472000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)
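
Each of the three status calls above exits with status 7, which here corresponds to a profile whose host exists but is stopped (compare exit status 85 earlier for a missing profile). A minimal sketch of the same check, assuming the nospam-472000 profile exists with its VM stopped:

	$ out/minikube-darwin-arm64 -p nospam-472000 status; echo $?
	# prints host/kubelet/apiserver/kubeconfig as Stopped and exits 7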

TestErrorSpam/pause (0.13s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 pause: exit status 83 (41.784667ms)

-- stdout --
	* The control-plane node nospam-472000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-472000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 pause: exit status 83 (40.910666ms)

-- stdout --
	* The control-plane node nospam-472000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-472000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 pause: exit status 83 (42.849209ms)

-- stdout --
	* The control-plane node nospam-472000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-472000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.13s)

TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 unpause: exit status 83 (40.34775ms)

-- stdout --
	* The control-plane node nospam-472000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-472000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 unpause: exit status 83 (41.227208ms)

-- stdout --
	* The control-plane node nospam-472000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-472000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 unpause: exit status 83 (41.743875ms)

-- stdout --
	* The control-plane node nospam-472000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-472000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)
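
pause and unpause both refuse to act on a stopped control plane and exit with status 83, printing the same advisory each time. A minimal sketch, assuming nospam-472000 is stopped:

	$ out/minikube-darwin-arm64 -p nospam-472000 pause; echo $?     # 83: host not running
	$ out/minikube-darwin-arm64 -p nospam-472000 unpause; echo $?   # 83
	# the suggested recovery, taken from the command output above:
	$ out/minikube-darwin-arm64 start -p nospam-472000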

TestErrorSpam/stop (8.55s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 stop: (3.37368975s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 stop: (1.897701917s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-472000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-472000 stop: (3.2801785s)
--- PASS: TestErrorSpam/stop (8.55s)

TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18517-6013/.minikube/files/etc/test/nested/copy/6476/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-821000 cache add registry.k8s.io/pause:3.1: (2.237339875s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-821000 cache add registry.k8s.io/pause:3.3: (2.21098625s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-821000 cache add registry.k8s.io/pause:latest: (1.822987958s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.27s)
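
cache add pulls each remote image onto the host and loads it into the cluster node, which is why each pause tag takes roughly two seconds here. A minimal sketch of the same workflow, assuming the functional-821000 profile is running:

	$ out/minikube-darwin-arm64 -p functional-821000 cache add registry.k8s.io/pause:3.1
	$ out/minikube-darwin-arm64 -p functional-821000 cache add registry.k8s.io/pause:3.3
	$ out/minikube-darwin-arm64 -p functional-821000 cache add registry.k8s.io/pause:latest
	$ out/minikube-darwin-arm64 cache list   # the cache itself is managed globally, not per profile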

TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-821000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1321572538/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 cache add minikube-local-cache-test:functional-821000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 cache delete minikube-local-cache-test:functional-821000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-821000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.22s)
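
The add_local variant exercises the same path with an image that exists only in the local Docker daemon. A minimal sketch, where <build-context> is a placeholder for any directory containing a Dockerfile:

	$ docker build -t minikube-local-cache-test:functional-821000 <build-context>
	$ out/minikube-darwin-arm64 -p functional-821000 cache add minikube-local-cache-test:functional-821000
	$ out/minikube-darwin-arm64 -p functional-821000 cache delete minikube-local-cache-test:functional-821000
	$ docker rmi minikube-local-cache-test:functional-821000   # clean up the host-side tag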

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 config get cpus: exit status 14 (32.523292ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 config get cpus: exit status 14 (37.852792ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)
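
config get distinguishes an unset key from an error: reading a key that is not in the config exits with status 14 and the stderr message captured above. A minimal sketch of the set/get/unset cycle the test runs:

	$ out/minikube-darwin-arm64 -p functional-821000 config set cpus 2
	$ out/minikube-darwin-arm64 -p functional-821000 config get cpus          # prints 2, exits 0
	$ out/minikube-darwin-arm64 -p functional-821000 config unset cpus
	$ out/minikube-darwin-arm64 -p functional-821000 config get cpus; echo $? # 14: key not found in config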

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-821000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-821000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (160.769166ms)

-- stdout --
	* [functional-821000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0327 10:56:37.434017    7487 out.go:291] Setting OutFile to fd 1 ...
	I0327 10:56:37.434166    7487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:56:37.434175    7487 out.go:304] Setting ErrFile to fd 2...
	I0327 10:56:37.434178    7487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:56:37.434344    7487 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 10:56:37.435616    7487 out.go:298] Setting JSON to false
	I0327 10:56:37.454814    7487 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5170,"bootTime":1711557027,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 10:56:37.454877    7487 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 10:56:37.458880    7487 out.go:177] * [functional-821000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 10:56:37.467535    7487 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 10:56:37.467570    7487 notify.go:220] Checking for updates...
	I0327 10:56:37.471423    7487 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 10:56:37.474522    7487 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 10:56:37.477551    7487 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 10:56:37.480594    7487 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 10:56:37.483572    7487 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 10:56:37.486787    7487 config.go:182] Loaded profile config "functional-821000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 10:56:37.487050    7487 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 10:56:37.491521    7487 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 10:56:37.498574    7487 start.go:297] selected driver: qemu2
	I0327 10:56:37.498580    7487 start.go:901] validating driver "qemu2" against &{Name:functional-821000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-821000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 10:56:37.498633    7487 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 10:56:37.504446    7487 out.go:177] 
	W0327 10:56:37.508519    7487 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0327 10:56:37.512596    7487 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-821000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)
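
--dry-run runs the full argument and resource validation without creating or mutating the VM, so an undersized --memory request is rejected up front with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A minimal sketch, assuming the functional-821000 profile already exists:

	$ out/minikube-darwin-arm64 start -p functional-821000 --dry-run --memory 250MB --driver=qemu2; echo $?
	# 23: 250MiB is below the 1800MB usable minimum
	$ out/minikube-darwin-arm64 start -p functional-821000 --dry-run --driver=qemu2; echo $?
	# 0: validation passes with the profile's existing 4000MB allocation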

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-821000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-821000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (113.993542ms)

-- stdout --
	* [functional-821000] minikube v1.33.0-beta.0 sur Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0327 10:56:37.667556    7498 out.go:291] Setting OutFile to fd 1 ...
	I0327 10:56:37.667671    7498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:56:37.667674    7498 out.go:304] Setting ErrFile to fd 2...
	I0327 10:56:37.667676    7498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 10:56:37.667810    7498 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18517-6013/.minikube/bin
	I0327 10:56:37.669203    7498 out.go:298] Setting JSON to false
	I0327 10:56:37.686055    7498 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5170,"bootTime":1711557027,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0327 10:56:37.686137    7498 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 10:56:37.690595    7498 out.go:177] * [functional-821000] minikube v1.33.0-beta.0 sur Darwin 14.3.1 (arm64)
	I0327 10:56:37.697538    7498 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 10:56:37.697614    7498 notify.go:220] Checking for updates...
	I0327 10:56:37.701579    7498 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	I0327 10:56:37.705550    7498 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 10:56:37.708558    7498 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 10:56:37.711539    7498 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	I0327 10:56:37.714542    7498 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 10:56:37.717856    7498 config.go:182] Loaded profile config "functional-821000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 10:56:37.718105    7498 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 10:56:37.722597    7498 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0327 10:56:37.729520    7498 start.go:297] selected driver: qemu2
	I0327 10:56:37.729526    7498 start.go:901] validating driver "qemu2" against &{Name:functional-821000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-821000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 10:56:37.729601    7498 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 10:56:37.736403    7498 out.go:177] 
	W0327 10:56:37.740563    7498 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0327 10:56:37.744504    7498 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
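
The French output captured above is the localized form of the same failure: "Utilisation du pilote qemu2 basé sur le profil existant" translates to "Using the qemu2 driver based on the existing profile", and the RSRC_INSUFFICIENT_REQ_MEMORY line to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is below the usable minimum of 1800 MB". A minimal sketch, assuming minikube selects its language from the standard locale environment variables (LC_ALL/LANG):

	$ LC_ALL=fr out/minikube-darwin-arm64 start -p functional-821000 --dry-run --memory 250MB --driver=qemu2
	# same exit status 23 as the English run, with localized messages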

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/License (1.46s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-arm64 license: (1.462001s)
--- PASS: TestFunctional/parallel/License (1.46s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (6.52s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (6.483012208s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-821000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (6.52s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-821000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 image rm gcr.io/google-containers/addon-resizer:functional-821000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-821000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 image save --daemon gcr.io/google-containers/addon-resizer:functional-821000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-821000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "73.135417ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "35.764459ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "72.183958ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "35.559083ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)
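
The timings recorded above show why the test measures both forms: the --light variant returns in roughly half the time (about 35ms versus about 72ms here) because it skips validating each cluster's live status. A minimal sketch:

	$ out/minikube-darwin-arm64 profile list -o json          # probes cluster status, slower
	$ out/minikube-darwin-arm64 profile list -o json --light  # skips the status check, faster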

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.0121885s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-821000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_addon-resizer_images (0.17s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-821000
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-821000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-821000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.35s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-869000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-869000 --output=json --user=testUser: (3.352980541s)
--- PASS: TestJSONOutput/stop/Command (3.35s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-924000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-924000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (98.1035ms)

-- stdout --
	{"specversion":"1.0","id":"57efa60d-3a54-4e2d-8924-b816a5971b0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-924000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cc15beb0-9fe5-4c14-a5ca-779ae0c36ffb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18517"}}
	{"specversion":"1.0","id":"321fda64-78f0-4d1d-b618-d05136e20572","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig"}}
	{"specversion":"1.0","id":"67336a91-a37e-4c88-b34e-5198e247ec62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"04528cdb-dc89-4d72-be5e-d2f096a42dd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"aad4795a-0582-4a73-a246-f9a7aa99b19a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube"}}
	{"specversion":"1.0","id":"479808fb-e80d-4e46-9814-f77ca16e2817","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0ce18751-baf4-4f29-b4a5-c35f889252d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-924000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-924000
--- PASS: TestErrorJSONOutput (0.33s)
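
With --output=json, every line on stdout is a CloudEvents-style JSON object, so failures stay machine-readable: the final event above has type io.k8s.sigs.minikube.error with exitcode 56 and name DRV_UNSUPPORTED_OS. A minimal sketch of consuming that stream, assuming jq is available on the host:

	$ out/minikube-darwin-arm64 start -p json-output-error-924000 --output=json --driver=fail \
	    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name'
	DRV_UNSUPPORTED_OS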

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (5.42s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (5.42s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-902000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-902000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (105.571542ms)

-- stdout --
	* [NoKubernetes-902000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18517-6013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18517-6013/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
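
--no-kubernetes and --kubernetes-version are mutually exclusive, and the combination is rejected during flag validation (exit status 14, MK_USAGE) before any VM work starts. A minimal sketch, including the fix the error message suggests for a version pinned in the global config:

	$ out/minikube-darwin-arm64 start -p NoKubernetes-902000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2; echo $?
	# 14: cannot specify --kubernetes-version with --no-kubernetes
	$ out/minikube-darwin-arm64 config unset kubernetes-version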

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-902000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-902000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.694583ms)

-- stdout --
	* The control-plane node NoKubernetes-902000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-902000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.4s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.715281625s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.680552959s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.40s)

TestNoKubernetes/serial/Stop (2.11s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-902000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-902000: (2.108052125s)
--- PASS: TestNoKubernetes/serial/Stop (2.11s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-902000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-902000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (48.5415ms)

-- stdout --
	* The control-plane node NoKubernetes-902000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-902000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)
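
Both VerifyK8sNotRunning checks probe kubelet through minikube ssh; because the systemctl probe can only run on a live host, a stopped machine short-circuits to exit status 83 before the unit state is ever consulted. A minimal sketch of the probe itself:

	$ out/minikube-darwin-arm64 ssh -p NoKubernetes-902000 "sudo systemctl is-active --quiet service kubelet"; echo $?
	# 0 only if kubelet is active on a running host; 83 here because the host is stopped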

TestStoppedBinaryUpgrade/MinikubeLogs (0.7s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-042000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.70s)

TestStartStop/group/old-k8s-version/serial/Stop (3.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-316000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-316000 --alsologtostderr -v=3: (3.026134083s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.03s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-316000 -n old-k8s-version-316000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-316000 -n old-k8s-version-316000: exit status 7 (58.508208ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-316000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
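Exit status 7 from "minikube status" is not treated as a failure here: the log notes "may be ok" because the host is merely stopped, and the test then enables the addon against the stopped profile. An illustrative Go fragment of that tolerance (not the suite's actual code; profile name from the log above):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", "old-k8s-version-316000").CombinedOutput()
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
            // Exit 7 with "Stopped" in the output is acceptable: the profile
            // exists but the host is not running, so we can keep going.
            fmt.Printf("host stopped (%s), continuing\n", out)
        } else if err != nil {
            log.Fatal(err)
        }
    }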

TestStartStop/group/no-preload/serial/Stop (1.99s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-070000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-070000 --alsologtostderr -v=3: (1.990773958s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.99s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-070000 -n no-preload-070000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-070000 -n no-preload-070000: exit status 7 (41.41125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-070000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (2.02s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-246000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-246000 --alsologtostderr -v=3: (2.021522625s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.02s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-246000 -n embed-certs-246000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-246000 -n embed-certs-246000: exit status 7 (54.661333ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-246000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-657000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-657000 --alsologtostderr -v=3: (3.606257125s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.61s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-071000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-071000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-071000 --alsologtostderr -v=3: (3.1034045s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.10s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-657000 -n default-k8s-diff-port-657000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-657000 -n default-k8s-diff-port-657000: exit status 7 (57.738ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-657000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-071000 -n newest-cni-071000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-071000 -n newest-cni-071000: exit status 7 (57.122917ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-071000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (24/266)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

TestDownloadOnly/v1.30.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)
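The === PAUSE / === CONT markers in this block are standard "go test" output: a subtest that calls t.Parallel() is paused until its parent's serial phase finishes, then resumed by the runner. A minimal illustration (generic Go testing behavior, not minikube-specific code):

    package example

    import "testing"

    func TestParallelSkip(t *testing.T) {
        t.Run("MySQL-like", func(t *testing.T) {
            t.Parallel() // prints "=== PAUSE"; the runner later prints "=== CONT"
            t.Skip("arm64 is not supported by mysql")
        })
    }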

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (14.24s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-821000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3750079580/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1711562159139135000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3750079580/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1711562159139135000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3750079580/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1711562159139135000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3750079580/001/test-1711562159139135000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (59.654916ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.294292ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.512459ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (96.459167ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.05775ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.5675ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.921417ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.955834ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "sudo umount -f /mount-9p": exit status 83 (48.907958ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-821000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-821000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3750079580/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (14.24s)
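The mount tests follow a poll-then-skip pattern: "minikube mount" serves the host directory over 9p while the test repeatedly runs findmnt in the guest; because macOS prompts before letting a non-code-signed binary listen on a non-localhost port, the mount never appears under this QEMU setup and the test skips rather than fails. A hedged sketch of the polling loop (illustrative only; the retry count and sleep interval are assumptions, the commands are from the log above):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        for i := 0; i < 8; i++ {
            // Succeeds only once the 9p mount is visible inside the guest.
            err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-821000",
                "ssh", "findmnt -T /mount-9p | grep 9p").Run()
            if err == nil {
                fmt.Println("mount appeared")
                return
            }
            time.Sleep(time.Second)
        }
        fmt.Println("mount did not appear; skipping")
    }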

TestFunctional/parallel/MountCmd/specific-port (11.81s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-821000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port4078061313/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (63.370667ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.874625ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.576625ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.348708ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.531791ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.303875ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.537167ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "sudo umount -f /mount-9p": exit status 83 (47.825292ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-821000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-821000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port4078061313/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.81s)

TestFunctional/parallel/MountCmd/VerifyCleanup (12.18s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-821000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup868547388/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-821000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup868547388/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-821000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup868547388/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T" /mount1: exit status 83 (83.824833ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T" /mount1: exit status 83 (86.186917ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T" /mount1: exit status 83 (86.91ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T" /mount1: exit status 83 (87.987791ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T" /mount1: exit status 83 (88.414333ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T" /mount1: exit status 83 (89.685792ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-821000 ssh "findmnt -T" /mount1: exit status 83 (86.985834ms)

-- stdout --
	* The control-plane node functional-821000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-821000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-821000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup868547388/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-821000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup868547388/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-821000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup868547388/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (12.18s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
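Skips like this one are driven by command-line flags on the test binary rather than by the environment. A generic sketch of the pattern (illustrative; only the flag name is taken from the log above):

    package example

    import (
        "flag"
        "testing"
    )

    var gvisor = flag.Bool("gvisor", false, "run gvisor integration tests")

    func TestGvisorStyle(t *testing.T) {
        if !*gvisor {
            t.Skip("skipping test because --gvisor=false")
        }
    }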

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.49s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-314000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-314000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-314000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-314000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-314000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-314000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-314000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-314000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-314000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-314000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-314000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: /etc/hosts:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: /etc/resolv.conf:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-314000

>>> host: crictl pods:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: crictl containers:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> k8s: describe netcat deployment:
error: context "cilium-314000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-314000" does not exist

>>> k8s: netcat logs:
error: context "cilium-314000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-314000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-314000" does not exist

>>> k8s: coredns logs:
error: context "cilium-314000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-314000" does not exist

>>> k8s: api server logs:
error: context "cilium-314000" does not exist

>>> host: /etc/cni:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: ip a s:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: ip r s:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: iptables-save:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: iptables table nat:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-314000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-314000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-314000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-314000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-314000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-314000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-314000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-314000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-314000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-314000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-314000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: kubelet daemon config:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> k8s: kubelet logs:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-314000

>>> host: docker daemon status:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: docker system info:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: cri-docker daemon status:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: cri-docker daemon config:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: cri-dockerd version:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: containerd daemon status:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: containerd daemon config:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: containerd config dump:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: crio daemon status:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: crio daemon config:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: /etc/crio:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

>>> host: crio config:
* Profile "cilium-314000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-314000"

----------------------- debugLogs end: cilium-314000 [took: 2.247412042s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-314000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-314000
--- SKIP: TestNetworkPlugins/group/cilium (2.49s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-655000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-655000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
